[jira] [Commented] (FLINK-6075) Support Limit/Top(Sort) for Stream SQL

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024239#comment-16024239
 ] 

ASF GitHub Bot commented on FLINK-6075:
---

Github user rtudoran commented on a diff in the pull request:

https://github.com/apache/flink/pull/3889#discussion_r118423723
  

[GitHub] flink pull request #3889: [FLINK-6075] - Support Limit/Top(Sort) for Stream ...

2017-05-24 Thread rtudoran
Github user rtudoran commented on a diff in the pull request:

https://github.com/apache/flink/pull/3889#discussion_r118423723
  
--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/aggregate/SortUtil.scala ---
@@ -0,0 +1,345 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.runtime.aggregate
+
+import org.apache.flink.table.calcite.FlinkTypeFactory
+import org.apache.flink.types.Row
+import org.apache.calcite.rel.`type`._
+import org.apache.calcite.rel.RelCollation
+import org.apache.flink.streaming.api.functions.ProcessFunction
+import org.apache.flink.table.functions.AggregateFunction
+import org.apache.calcite.sql.`type`.SqlTypeName
+import org.apache.flink.table.api.TableException
+import org.apache.calcite.sql.`type`.SqlTypeName._
+import java.util.{ List => JList, ArrayList }
+import org.apache.flink.api.common.typeinfo.{ SqlTimeTypeInfo, TypeInformation }
+import org.apache.flink.api.java.typeutils.RowTypeInfo
+import java.sql.Timestamp
+import org.apache.calcite.rel.RelFieldCollation
+import org.apache.calcite.rel.RelFieldCollation.Direction
+import java.util.Comparator
+import org.apache.flink.api.common.typeutils.TypeComparator
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo._
+import java.lang.{Byte => JByte, Integer => JInt, Long => JLong, Double => JDouble, Short => JShort, String => JString, Float => JFloat}
+import java.math.{BigDecimal => JBigDecimal}
+import org.apache.flink.api.common.functions.MapFunction
+import org.apache.flink.api.common.operators.Order
+import org.apache.calcite.rex.{RexLiteral, RexNode}
+import org.apache.flink.api.common.ExecutionConfig
+import org.apache.flink.api.common.typeinfo.AtomicType
+import org.apache.flink.api.java.typeutils.runtime.RowComparator
+import org.apache.flink.api.common.typeutils.TypeSerializer
+import org.apache.flink.table.runtime.types.{CRow, CRowTypeInfo}
+
+import scala.collection.JavaConverters._
+
+/**
+ * Collection of helper methods to build the sort logic.
+ * It also encapsulates the implementation of the ordering and generic interfaces.
+ */
+object SortUtil {
+
+  /**
+   * Creates a [org.apache.flink.streaming.api.functions.ProcessFunction] for sorting
+   * elements based on rowtime and potentially other fields.
+   * @param collationSort the sort collation list
+   * @param inputType input row type
+   * @param inputTypeInfo input row type information
+   * @param execCfg table environment execution configuration
+   * @return org.apache.flink.streaming.api.functions.ProcessFunction
+   */
+  private[flink] def createRowTimeSortFunction(
+    collationSort: RelCollation,
+    inputType: RelDataType,
+    inputTypeInfo: TypeInformation[Row],
+    execCfg: ExecutionConfig): ProcessFunction[CRow, CRow] = {
+
+    val keySortFields = getSortFieldIndexList(collationSort)
+    val keySortDirections = getSortFieldDirectionList(collationSort)
+
+    // drop time from the comparison, as we sort on time in the states and result emission
+    val keyIndexesNoTime = keySortFields.slice(1, keySortFields.size)
+    val keyDirectionsNoTime = keySortDirections.slice(1, keySortDirections.size)
+    val booleanOrderings = getSortFieldDirectionBooleanList(collationSort)
+    val booleanDirectionsNoTime = booleanOrderings.slice(1, booleanOrderings.size)
+
+    val fieldComps = createFieldComparators(inputType, keyIndexesNoTime, keyDirectionsNoTime, execCfg)
+    val fieldCompsRefs = fieldComps.asInstanceOf[Array[TypeComparator[AnyRef]]]
+
+    val rowComp = createRowComparator(inputType, keyIndexesNoTime, fieldCompsRefs, booleanDirectionsNoTime)
+    val collectionRowComparator = new CollectionRowComparator(rowComp)
+
+    val inputCRowType = CRowTypeInfo(inputTypeInfo)
+
+    new
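The diff above builds per-field comparators for the non-time sort keys and chains them into a single row comparator (the rowtime field, index 0 of the collation, is handled separately via state and timers). As a rough, self-contained sketch of that chaining idea — plain Java, not Flink's actual `TypeComparator`/`RowComparator` machinery, with all names illustrative:

```java
import java.util.Comparator;

public class RowSortSketch {

    // Builds a lexicographic comparator over the given field indexes.
    // keyIdx: positions of the sort fields within a row (the rowtime field
    // is assumed to be excluded already, mirroring keyIndexesNoTime above).
    // ascending[i]: true for ASC, false for DESC on keyIdx[i].
    static Comparator<Object[]> rowComparator(int[] keyIdx, boolean[] ascending) {
        return (a, b) -> {
            for (int i = 0; i < keyIdx.length; i++) {
                @SuppressWarnings("unchecked")
                int c = ((Comparable<Object>) a[keyIdx[i]]).compareTo(b[keyIdx[i]]);
                if (!ascending[i]) {
                    c = -c;   // flip for descending direction
                }
                if (c != 0) {
                    return c; // first differing field decides
                }
            }
            return 0;         // rows equal on all sort fields
        };
    }

    public static void main(String[] args) {
        // Sort on field 1 descending, then field 2 ascending (field 0 = rowtime, skipped).
        Comparator<Object[]> cmp =
            rowComparator(new int[]{1, 2}, new boolean[]{false, true});
        Object[] r1 = {1000L, 5, "b"};
        Object[] r2 = {1000L, 5, "a"};
        Object[] r3 = {1000L, 7, "z"};
        System.out.println(cmp.compare(r3, r1) < 0);  // 7 desc before 5 -> true
        System.out.println(cmp.compare(r2, r1) < 0);  // "a" asc before "b" -> true
    }
}
```

The real implementation additionally derives typed comparators from the Calcite `RelDataType` and wraps the result in a `CollectionRowComparator`; this sketch only shows the direction handling and field chaining.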

[jira] [Updated] (FLINK-6713) Document how to allow multiple Kafka consumers / producers to authenticate using different credentials

2017-05-24 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-6713:
---
Description: 
The doc improvements should include:

1. Clearly state that the built-in JAAS security module in Flink is a JVM 
process-wide static JAAS file installation (all static JAAS files are, not 
Flink specific), and therefore only allows all Kafka consumers and producers in 
a single JVM (and therefore the whole job, since we do not allow assigning 
operators to specific slots) to authenticate as one single user.

2. If Kerberos authentication is used: self-ship multiple keytab files, and use 
Kafka's dynamic JAAS configuration through client properties to point to 
separate keytabs for each consumer / producer. Note that ticket cache would 
never work for multiple authentications.

3. If plain simple login is used: Kafka's dynamic JAAS configuration should be 
used (and is the only way to do so).

  was:
The doc improvements should include:

1. Clearly state that the built-in JAAS security module in Flink is a JVM 
process-wide static JAAS file installation (all static JAAS files are, not 
Flink specific), and therefore only allows all Kafka consumers and producers in 
a single JVM (and therefore the whole job, since we do not allow assigning 
operators to specific slots) to authenticate as one single user.

2. If Kerberos authentication is used, 2 approaches: 1) with Flink's built-in 
Kerberos support, multiple user principals need to be merged as a single 
keytab, or 2) self-ship multiple keytab files, and use Kafka's dynamic JAAS 
configuration through client properties to point to separate keytabs for each 
consumer / producer. Note that ticket cache would never work for multiple 
authentications.

3. If plain simple login is used: Kafka's dynamic JAAS configuration should be 
used (and is the only way to do so).


> Document how to allow multiple Kafka consumers / producers to authenticate 
> using different credentials
> --
>
> Key: FLINK-6713
> URL: https://issues.apache.org/jira/browse/FLINK-6713
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Kafka Connector
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>
> The doc improvements should include:
> 1. Clearly state that the built-in JAAS security module in Flink is a JVM 
> process-wide static JAAS file installation (all static JAAS files are, not 
> Flink specific), and therefore only allows all Kafka consumers and producers 
> in a single JVM (and therefore the whole job, since we do not allow assigning 
> operators to specific slots) to authenticate as one single user.
> 2. If Kerberos authentication is used: self-ship multiple keytab files, and 
> use Kafka's dynamic JAAS configuration through client properties to point to 
> separate keytabs for each consumer / producer. Note that ticket cache would 
> never work for multiple authentications.
> 3. If plain simple login is used: Kafka's dynamic JAAS configuration should 
> be used (and is the only way to do so).
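The per-client approach in points 2 and 3 relies on Kafka's dynamic JAAS configuration (KIP-85, Kafka >= 0.10.2), where the `sasl.jaas.config` client property overrides the JVM-wide static JAAS file for that one client. A minimal sketch of building such properties for the Kerberos case — the principals and keytab paths below are illustrative placeholders, and the keytabs must be shipped with the job:

```java
import java.util.Properties;

public class PerClientJaasConfig {

    // Per-client Kerberos credentials via Kafka's dynamic JAAS configuration.
    // The sasl.jaas.config property applies to this client only, so two
    // consumers in the same JVM can authenticate as different users.
    static Properties kerberosClientProps(String principal, String keytabPath) {
        Properties props = new Properties();
        props.setProperty("security.protocol", "SASL_PLAINTEXT");
        props.setProperty("sasl.kerberos.service.name", "kafka");
        props.setProperty("sasl.jaas.config",
            "com.sun.security.auth.module.Krb5LoginModule required"
                + " useKeyTab=true storeKey=true"
                + " keytab=\"" + keytabPath + "\""
                + " principal=\"" + principal + "\";");
        return props;
    }

    public static void main(String[] args) {
        Properties consumerA = kerberosClientProps("alice@EXAMPLE.COM", "/tmp/alice.keytab");
        Properties consumerB = kerberosClientProps("bob@EXAMPLE.COM", "/tmp/bob.keytab");
        System.out.println(consumerA.getProperty("sasl.jaas.config"));
        System.out.println(consumerB.getProperty("sasl.jaas.config"));
    }
}
```

Each `Properties` object would then be passed to its own Kafka consumer or producer; a JVM-wide ticket cache cannot express this, which is why the description rules it out for multiple authentications.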



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6713) Document how to allow multiple Kafka consumers / producers to authenticate using different credentials

2017-05-24 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-6713:
--

 Summary: Document how to allow multiple Kafka consumers / 
producers to authenticate using different credentials
 Key: FLINK-6713
 URL: https://issues.apache.org/jira/browse/FLINK-6713
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Kafka Connector
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai


The doc improvements should include:

1. Clearly state that the built-in JAAS security module in Flink is a JVM 
process-wide static JAAS file installation (all static JAAS files are, not 
Flink specific), and therefore only allows all Kafka consumers and producers in 
a single JVM (and therefore the whole job, since we do not allow assigning 
operators to specific slots) to authenticate as one single user.

2. If Kerberos authentication is used, 2 approaches: 1) with Flink's built-in 
Kerberos support, multiple user principals need to be merged as a single 
keytab, or 2) self-ship multiple keytab files, and use Kafka's dynamic JAAS 
configuration through client properties to point to separate keytabs for each 
consumer / producer. Note that ticket cache would never work for multiple 
authentications.

3. If plain simple login is used: Kafka's dynamic JAAS configuration should be 
used (and is the only way to do so).





[jira] [Created] (FLINK-6712) Bump Kafka010 version to 0.10.2.0

2017-05-24 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-6712:
--

 Summary: Bump Kafka010 version to 0.10.2.0
 Key: FLINK-6712
 URL: https://issues.apache.org/jira/browse/FLINK-6712
 Project: Flink
  Issue Type: Improvement
  Components: Kafka Connector
Reporter: Tzu-Li (Gordon) Tai


The main reason for the default version bump is to allow dynamic JAAS 
configurations using client properties (see 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-85%3A+Dynamic+JAAS+configuration+for+Kafka+clients),
 which will make it possible for multiple Kafka consumers / producers in 
the same job to have different authentication settings.





[jira] [Commented] (FLINK-6707) Activate strict checkstyle for flink-examples

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023979#comment-16023979
 ] 

ASF GitHub Bot commented on FLINK-6707:
---

GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/3986

[FLINK-6707] [examples] Activate strict checkstyle for flink-examples



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 6707_activate_strict_checkstyle_for_flink_examples

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3986.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3986


commit 84922daf25f7a0d887e9a0b85d336a24cab5e37a
Author: Greg Hogan 
Date:   2017-05-24T17:24:02Z

[FLINK-6707] [examples] Activate strict checkstyle for flink-examples

commit f70118e5356fb28a2778dd1603e77cfc6603b1ab
Author: Greg Hogan 
Date:   2017-05-24T19:31:45Z

Rename packages




> Activate strict checkstyle for flink-examples
> -
>
> Key: FLINK-6707
> URL: https://issues.apache.org/jira/browse/FLINK-6707
> Project: Flink
>  Issue Type: Sub-task
>  Components: Examples
>Affects Versions: 1.4.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Trivial
> Fix For: 1.4.0
>
>








[GitHub] flink pull request #3967: [FLINK-6669] Scala style check errror on Windows

2017-05-24 Thread lingjinjiang
Github user lingjinjiang closed the pull request at:

https://github.com/apache/flink/pull/3967


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6669) [Build] Scala style check errror on Windows

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023937#comment-16023937
 ] 

ASF GitHub Bot commented on FLINK-6669:
---

Github user lingjinjiang commented on the issue:

https://github.com/apache/flink/pull/3967
  
@zentol Thanks for your review. I'll close it.


> [Build] Scala style check errror on Windows
> ---
>
> Key: FLINK-6669
> URL: https://issues.apache.org/jira/browse/FLINK-6669
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0
> Environment: Windows
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Minor
> Fix For: 1.4.0
>
> Attachments: FLINK-6669.patch
>
>
> When building the source code on Windows, a Scala style check error happened.
> Here are the error messages:
> [INFO]
> [INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ flink-scala_2.10 ---
> error file=E:\github\flink\flink-scala\src\main\scala\org\apache\flink\api\scala\utils\package.scala message=Input length = 2
> Saving to outputFile=E:\github\flink\flink-scala\target\scalastyle-output.xml
> Processed 78 file(s)
> Found 1 errors
> Found 0 warnings
> Found 0 infos
> Finished in 1189 ms
> [INFO] 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] force-shading .. SUCCESS [ 37.206 s]
> [INFO] flink .. SUCCESS [03:27 min]
> [INFO] flink-annotations .. SUCCESS [  3.020 s]
> [INFO] flink-shaded-hadoop  SUCCESS [  0.928 s]
> [INFO] flink-shaded-hadoop2 ... SUCCESS [ 15.314 s]
> [INFO] flink-shaded-hadoop2-uber .. SUCCESS [ 13.085 s]
> [INFO] flink-shaded-curator ... SUCCESS [  0.234 s]
> [INFO] flink-shaded-curator-recipes ... SUCCESS [  3.336 s]
> [INFO] flink-shaded-curator-test .. SUCCESS [  2.948 s]
> [INFO] flink-metrics .. SUCCESS [  0.286 s]
> [INFO] flink-metrics-core . SUCCESS [  9.065 s]
> [INFO] flink-test-utils-parent  SUCCESS [  0.327 s]
> [INFO] flink-test-utils-junit . SUCCESS [  1.452 s]
> [INFO] flink-core . SUCCESS [ 54.277 s]
> [INFO] flink-java . SUCCESS [ 25.244 s]
> [INFO] flink-runtime .. SUCCESS [03:08 min]
> [INFO] flink-optimizer  SUCCESS [ 14.540 s]
> [INFO] flink-clients .. SUCCESS [ 14.457 s]
> [INFO] flink-streaming-java ... SUCCESS [ 58.130 s]
> [INFO] flink-test-utils ... SUCCESS [ 19.906 s]
> [INFO] flink-scala  FAILURE [ 56.634 s]
> [INFO] flink-runtime-web .. SKIPPED
> I think this is caused by the Windows default encoding. When I set the
> inputEncoding to UTF-8 in scalastyle-maven-plugin, the error doesn't happen.
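The fix the reporter describes — pinning the source encoding so Windows' platform default is not used — would look roughly like this in the scalastyle-maven-plugin configuration. A sketch, assuming the plugin's `inputEncoding` parameter:

```xml
<plugin>
  <groupId>org.scalastyle</groupId>
  <artifactId>scalastyle-maven-plugin</artifactId>
  <version>0.8.0</version>
  <configuration>
    <!-- Read sources as UTF-8 instead of the platform default encoding
         (e.g. windows-1252), which triggers "Input length = 2" -->
    <inputEncoding>UTF-8</inputEncoding>
  </configuration>
</plugin>
```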





[jira] [Commented] (FLINK-6669) [Build] Scala style check errror on Windows

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023938#comment-16023938
 ] 

ASF GitHub Bot commented on FLINK-6669:
---

Github user lingjinjiang closed the pull request at:

https://github.com/apache/flink/pull/3967






[jira] [Commented] (FLINK-6689) Remote StreamExecutionEnvironment fails to submit jobs against LocalFlinkMiniCluster

2017-05-24 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023797#comment-16023797
 ] 

Till Rohrmann commented on FLINK-6689:
--

To submit a job to a {{LocalFlinkMiniCluster}}, one should now use the 
{{TestEnvironment}} or {{TestStreamEnvironment}}, respectively. The 
{{RemoteStreamExecutionEnvironment}} has no way to retrieve the current leader 
session id; if one wants to use the {{RemoteStreamExecutionEnvironment}}, then 
one has to start a proper Flink cluster.

> Remote StreamExecutionEnvironment fails to submit jobs against 
> LocalFlinkMiniCluster
> 
>
> Key: FLINK-6689
> URL: https://issues.apache.org/jira/browse/FLINK-6689
> Project: Flink
>  Issue Type: Bug
>  Components: Client, Job-Submission
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
> Fix For: 1.3.0
>
>
> The following Flink program fails to execute with the current 1.3 branch 
> (1.2 works) because the leader session ID being used is wrong:
> {code:java}
> final String jobManagerAddress = "localhost";
> final int jobManagerPort = ConfigConstants.DEFAULT_JOB_MANAGER_IPC_PORT;
> final Configuration config = new Configuration();
>   config.setString(ConfigConstants.JOB_MANAGER_IPC_ADDRESS_KEY, 
> jobManagerAddress);
>   config.setInteger(ConfigConstants.JOB_MANAGER_IPC_PORT_KEY, 
> jobManagerPort);
>   config.setBoolean(ConfigConstants.LOCAL_START_WEBSERVER, true);
> final LocalFlinkMiniCluster cluster = new LocalFlinkMiniCluster(config, 
> false);
> cluster.start(true);
> final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.createRemoteEnvironment(jobManagerAddress, 
> jobManagerPort);
> env.fromElements(1l).addSink(new DiscardingSink());
> // fails due to leader session id being wrong:
> env.execute("test");
> {code}
> Output from the logs contains:
> {code}
> ...
> 16:24:57,551 INFO  org.apache.flink.runtime.webmonitor.JobManagerRetriever - New leader reachable under akka.tcp://flink@localhost:6123/user/jobmanager:ff0d56cf-6205-4dd4-a266-03847f4d6944.
> 16:24:57,894 INFO  org.apache.flink.streaming.api.environment.RemoteStreamEnvironment - Running remotely at localhost:6123
> 16:24:58,121 INFO  org.apache.flink.client.program.StandaloneClusterClient - Starting client actor system.
> 16:24:58,123 INFO  org.apache.flink.runtime.util.LeaderRetrievalUtils - Trying to select the network interface and address to use by connecting to the leading JobManager.
> 16:24:58,128 INFO  org.apache.flink.runtime.util.LeaderRetrievalUtils - TaskManager will try to connect for 1 milliseconds before falling back to heuristics
> 16:24:58,132 INFO  org.apache.flink.runtime.net.ConnectionUtils - Retrieved new target address localhost/127.0.0.1:6123.
> 16:24:58,258 INFO  akka.event.slf4j.Slf4jLogger - Slf4jLogger started
> 16:24:58,262 INFO  Remoting - Starting remoting
> 16:24:58,375 INFO  Remoting - Remoting started; listening on addresses :[akka.tcp://fl...@nico-work.fritz.box:43413]
> 16:24:58,376 INFO  org.apache.flink.client.program.StandaloneClusterClient - Submitting job with JobID: 9bef4793a4b7f4caaad96bd28211cbb9. Waiting for job completion.
> Submitting job with JobID: 9bef4793a4b7f4caaad96bd28211cbb9. Waiting for job completion.
> 16:24:58,382 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor - Disconnect from JobManager null.
> 16:24:58,398 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor - Received SubmitJobAndWait(JobGraph(jobId: 9bef4793a4b7f4caaad96bd28211cbb9)) but there is no connection to a JobManager yet.
> 16:24:58,398 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor - Received job test (9bef4793a4b7f4caaad96bd28211cbb9).
> 16:24:58,429 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor - Connect to JobManager Actor[akka.tcp://flink@localhost:6123/user/jobmanager#1881889998].
> 16:24:58,432 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor - Connected to JobManager at Actor[akka.tcp://flink@localhost:6123/user/jobmanager#1881889998] with leader session id ----.
> Connected to JobManager at Actor[akka.tcp://flink@localhost:6123/user/jobmanager#1881889998] with leader session id ----.
> 16:24:58,432 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor - Sending message to JobManager akka.tcp://flink@localhost:6123/user/jobmanager to submit job 

[jira] [Commented] (FLINK-6699) Activate strict-checkstyle in flink-yarn-tests

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023645#comment-16023645
 ] 

ASF GitHub Bot commented on FLINK-6699:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3985
  
+1 with green from Travis


> Activate strict-checkstyle in flink-yarn-tests
> --
>
> Key: FLINK-6699
> URL: https://issues.apache.org/jira/browse/FLINK-6699
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests, YARN
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>








[jira] [Commented] (FLINK-6038) Add deep links to Apache Bahir Flink streaming connector documentations

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023637#comment-16023637
 ] 

ASF GitHub Bot commented on FLINK-6038:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3975#discussion_r118245299
  
--- Diff: docs/dev/connectors/index.md ---
@@ -54,6 +54,16 @@ Note also that while the streaming connectors listed in 
this section are part of
 Flink project and are included in source releases, they are not included 
in the binary distributions. 
 Further instructions can be found in the corresponding subsections.
 
+## Connectors in Apache Bahir
+
+The [Apache Bahir](https://bahir.apache.org/) project provides some 
additional streaming connectors for Flink, including:
--- End diff --

Remove "some". Consider reversing and expanding the statement, something 
like "Additional streaming connectors for Flink are being released through 
[Apache Bahir](https://bahir.apache.org/), including:". This could help 
acclimate contributors and treat Bahir less like a third party.


> Add deep links to Apache Bahir Flink streaming connector documentations
> ---
>
> Key: FLINK-6038
> URL: https://issues.apache.org/jira/browse/FLINK-6038
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Streaming Connectors
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: David Anderson
>
> Recently, the documentation for Flink streaming connectors in Bahir was 
> added to Bahir's website: BAHIR-90.
> We should add deep links to the individual Bahir connector docs under 
> {{/dev/connectors/overview}}, instead of just shallow links to the source 
> {{README.md}} s in the community ecosystem page.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)



[jira] [Created] (FLINK-6711) Activate strict checkstyle for flink-connectors

2017-05-24 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6711:
---

 Summary: Activate strict checkstyle for flink-connectors
 Key: FLINK-6711
 URL: https://issues.apache.org/jira/browse/FLINK-6711
 Project: Flink
  Issue Type: Sub-task
  Components: Batch Connectors and Input/Output Formats, Streaming 
Connectors
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0









[jira] [Commented] (FLINK-6699) Activate strict-checkstyle in flink-yarn-tests

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023602#comment-16023602
 ] 

ASF GitHub Bot commented on FLINK-6699:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/3985

[FLINK-6699] Activate strict checkstyle for flink-yarn-tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6699_checkstyle_yarnt

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3985.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3985


commit 1c377d1b82047a17d1dfb903d0a6b033b88b7f64
Author: zentol 
Date:   2017-05-24T10:42:28Z

[FLINK-6699] Activate strict checkstyle for flink-yarn-tests




> Activate strict-checkstyle in flink-yarn-tests
> --
>
> Key: FLINK-6699
> URL: https://issues.apache.org/jira/browse/FLINK-6699
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests, YARN
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[jira] [Commented] (FLINK-6137) Activate strict checkstyle for flink-cep

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023596#comment-16023596
 ] 

ASF GitHub Bot commented on FLINK-6137:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3976
  
You can rebase this PR now to get rid of the scala import block commit.


> Activate strict checkstyle for flink-cep
> 
>
> Key: FLINK-6137
> URL: https://issues.apache.org/jira/browse/FLINK-6137
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Minor
>
> Add a custom checkstyle.xml for `flink-cep` library as in [FLINK-6107]






[jira] [Commented] (FLINK-6692) The flink-dist jar contains unshaded netty jar

2017-05-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023587#comment-16023587
 ] 

Haohui Mai commented on FLINK-6692:
---

The issue is also present in Flink 1.2.0.

> The flink-dist jar contains unshaded netty jar
> --
>
> Key: FLINK-6692
> URL: https://issues.apache.org/jira/browse/FLINK-6692
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 1.3.0
>
>
> The {{flink-dist}} jar contains unshaded netty 3 and netty 4 classes:
> {noformat}
> io/netty/handler/codec/http/router/
> io/netty/handler/codec/http/router/BadClientSilencer.class
> io/netty/handler/codec/http/router/MethodRouted.class
> io/netty/handler/codec/http/router/Handler.class
> io/netty/handler/codec/http/router/Router.class
> io/netty/handler/codec/http/router/DualMethodRouter.class
> io/netty/handler/codec/http/router/Routed.class
> io/netty/handler/codec/http/router/AbstractHandler.class
> io/netty/handler/codec/http/router/KeepAliveWrite.class
> io/netty/handler/codec/http/router/DualAbstractHandler.class
> io/netty/handler/codec/http/router/MethodRouter.class
> {noformat}
> {noformat}
> org/jboss/netty/util/internal/jzlib/InfBlocks.class
> org/jboss/netty/util/internal/jzlib/InfCodes.class
> org/jboss/netty/util/internal/jzlib/InfTree.class
> org/jboss/netty/util/internal/jzlib/Inflate$1.class
> org/jboss/netty/util/internal/jzlib/Inflate.class
> org/jboss/netty/util/internal/jzlib/JZlib$WrapperType.class
> org/jboss/netty/util/internal/jzlib/JZlib.class
> org/jboss/netty/util/internal/jzlib/StaticTree.class
> org/jboss/netty/util/internal/jzlib/Tree.class
> org/jboss/netty/util/internal/jzlib/ZStream$1.class
> org/jboss/netty/util/internal/jzlib/ZStream.class
> {noformat}
> Is this expected behavior?





[jira] [Commented] (FLINK-6710) Remove twitter-inputformat

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023584#comment-16023584
 ] 

ASF GitHub Bot commented on FLINK-6710:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3984
  
sure. no problem.


> Remove twitter-inputformat
> --
>
> Key: FLINK-6710
> URL: https://issues.apache.org/jira/browse/FLINK-6710
> Project: Flink
>  Issue Type: Improvement
>  Components: Batch Connectors and Input/Output Formats
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 1.4.0
>
>
> I propose removing the twitter-inputformat under flink-contrib.
> It provides no interesting properties in terms of accessing tweets (since it 
> just reads them from a file) in contrast to the streaming {{TwitterSource}}, 
> nor provides any significant functionality that cannot be achieved using the 
> jackson databind API.






[jira] [Closed] (FLINK-6691) Add checkstyle import block rule for scala imports

2017-05-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-6691.
---
Resolution: Fixed

1.4: bf92055afb4f76299caa8496dfe0fbdeb3305d78

> Add checkstyle import block rule for scala imports
> --
>
> Key: FLINK-6691
> URL: https://issues.apache.org/jira/browse/FLINK-6691
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> Similar to java and javax imports we should give scala imports a separate 
> import block.
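
A minimal sketch of what such a rule could look like in a checkstyle.xml, using the standard Checkstyle `ImportOrder` check. The group names and their order here are illustrative only and are not taken from Flink's actual configuration:

```xml
<!-- Sketch: give "scala" imports their own block, analogous to java/javax.
     Group order is illustrative, not Flink's real checkstyle.xml. -->
<module name="ImportOrder">
  <!-- Each named group forms a separate import block... -->
  <property name="groups" value="org.apache.flink,org,java,javax,scala"/>
  <!-- ...and blocks must be separated by a blank line. -->
  <property name="separated" value="true"/>
</module>
```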





[jira] [Commented] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023576#comment-16023576
 ] 

ASF GitHub Bot commented on FLINK-6687:
---

Github user zentol closed the pull request at:

https://github.com/apache/flink/pull/3973


> Activate strict checkstyle for flink-runtime-web
> 
>
> Key: FLINK-6687
> URL: https://issues.apache.org/jira/browse/FLINK-6687
> Project: Flink
>  Issue Type: Sub-task
>  Components: Webfrontend
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[jira] [Closed] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-6687.
---
Resolution: Fixed

1.4: d481f295005461b31c952e9f1d45aa175d803b01

> Activate strict checkstyle for flink-runtime-web
> 
>
> Key: FLINK-6687
> URL: https://issues.apache.org/jira/browse/FLINK-6687
> Project: Flink
>  Issue Type: Sub-task
>  Components: Webfrontend
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>







[jira] [Commented] (FLINK-6710) Remove twitter-inputformat

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023571#comment-16023571
 ] 

ASF GitHub Bot commented on FLINK-6710:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/3984
  
+1 to remove it (however, please merge this only after at least one other 
PMC member has also +1'd and some time has passed; I don't want to cause any 
discussions afterwards).


> Remove twitter-inputformat
> --
>
> Key: FLINK-6710
> URL: https://issues.apache.org/jira/browse/FLINK-6710
> Project: Flink
>  Issue Type: Improvement
>  Components: Batch Connectors and Input/Output Formats
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 1.4.0
>
>
> I propose removing the twitter-inputformat under flink-contrib.
> It provides no interesting properties in terms of accessing tweets (since it 
> just reads them from a file) in contrast to the streaming {{TwitterSource}}, 
> nor provides any significant functionality that cannot be achieved using the 
> jackson databind API.






[jira] [Comment Edited] (FLINK-6692) The flink-dist jar contains unshaded netty jar

2017-05-24 Thread Robert Metzger (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023552#comment-16023552
 ] 

Robert Metzger edited comment on FLINK-6692 at 5/24/17 7:53 PM:


Did you check if the classes are present in Flink 1.2.0? (IIRC, we shade 
Hadoop's Netty dependency; with our own Netty copies (one for Akka, one for our 
network stack) I'm not sure.)
Maybe it's an issue introduced by the recent Maven / shading changes.


was (Author: rmetzger):
Did you check if the classes are present in Flink 1.2.0?
Maybe it's an issue introduced by the recent Maven / shading changes.
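
For context, class files like the ones quoted in the issue are normally kept out of a fat jar by relocating them at build time. A minimal maven-shade-plugin sketch of such a relocation follows; the shaded package prefixes are illustrative assumptions, not taken from Flink's actual poms:

```xml
<!-- Sketch: relocate both netty generations so their classes no longer
     appear under their original package names in flink-dist.
     Prefixes are illustrative, not Flink's real configuration. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <!-- netty 4 (io.netty.*) -->
      <relocation>
        <pattern>io.netty</pattern>
        <shadedPattern>org.apache.flink.shaded.io.netty</shadedPattern>
      </relocation>
      <!-- netty 3 (org.jboss.netty.*) -->
      <relocation>
        <pattern>org.jboss.netty</pattern>
        <shadedPattern>org.apache.flink.shaded.org.jboss.netty</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```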

> The flink-dist jar contains unshaded netty jar
> --
>
> Key: FLINK-6692
> URL: https://issues.apache.org/jira/browse/FLINK-6692
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 1.3.0
>
>
> The {{flink-dist}} jar contains unshaded netty 3 and netty 4 classes:
> {noformat}
> io/netty/handler/codec/http/router/
> io/netty/handler/codec/http/router/BadClientSilencer.class
> io/netty/handler/codec/http/router/MethodRouted.class
> io/netty/handler/codec/http/router/Handler.class
> io/netty/handler/codec/http/router/Router.class
> io/netty/handler/codec/http/router/DualMethodRouter.class
> io/netty/handler/codec/http/router/Routed.class
> io/netty/handler/codec/http/router/AbstractHandler.class
> io/netty/handler/codec/http/router/KeepAliveWrite.class
> io/netty/handler/codec/http/router/DualAbstractHandler.class
> io/netty/handler/codec/http/router/MethodRouter.class
> {noformat}
> {noformat}
> org/jboss/netty/util/internal/jzlib/InfBlocks.class
> org/jboss/netty/util/internal/jzlib/InfCodes.class
> org/jboss/netty/util/internal/jzlib/InfTree.class
> org/jboss/netty/util/internal/jzlib/Inflate$1.class
> org/jboss/netty/util/internal/jzlib/Inflate.class
> org/jboss/netty/util/internal/jzlib/JZlib$WrapperType.class
> org/jboss/netty/util/internal/jzlib/JZlib.class
> org/jboss/netty/util/internal/jzlib/StaticTree.class
> org/jboss/netty/util/internal/jzlib/Tree.class
> org/jboss/netty/util/internal/jzlib/ZStream$1.class
> org/jboss/netty/util/internal/jzlib/ZStream.class
> {noformat}
> Is this expected behavior?






[jira] [Commented] (FLINK-6569) flink-table KafkaJsonTableSource example doesn't work

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023549#comment-16023549
 ] 

ASF GitHub Bot commented on FLINK-6569:
---

Github user haohui closed the pull request at:

https://github.com/apache/flink/pull/3890


> flink-table KafkaJsonTableSource example doesn't work
> -
>
> Key: FLINK-6569
> URL: https://issues.apache.org/jira/browse/FLINK-6569
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Robert Metzger
>Assignee: Haohui Mai
> Fix For: 1.3.0
>
>
> The code example uses 
> {code}
> TypeInformation<Row> typeInfo = Types.ROW(
>   new String[] { "id", "name", "score" },
>   new TypeInformation<?>[] { Types.INT(), Types.STRING(), Types.DOUBLE() }
> );
> {code}
> the correct way of using it is something like
> {code}
> TypeInformation<Row> typeInfo = Types.ROW_NAMED(
>     new String[] { "id", "zip", "date" },
>     Types.LONG, Types.INT, Types.SQL_DATE);
> {code}





[jira] [Commented] (FLINK-6688) Activate strict checkstyle in flink-test-utils

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023550#comment-16023550
 ] 

ASF GitHub Bot commented on FLINK-6688:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3983
  
+1 if Travis


> Activate strict checkstyle in flink-test-utils
> --
>
> Key: FLINK-6688
> URL: https://issues.apache.org/jira/browse/FLINK-6688
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>







[jira] [Commented] (FLINK-6710) Remove twitter-inputformat

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023548#comment-16023548
 ] 

ASF GitHub Bot commented on FLINK-6710:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/3984

[FLINK-6710] Remove Twitter-InputFormat

This PR removes the twitter InputFormat as it doesn't provide any 
significant functionality.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6710_dead_bird

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3984.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3984


commit da74f9173d4280785a62abed84ee756963049602
Author: zentol 
Date:   2017-05-24T19:48:31Z

[FLINK-6710] Remove Twitter-InputFormat




> Remove twitter-inputformat
> --
>
> Key: FLINK-6710
> URL: https://issues.apache.org/jira/browse/FLINK-6710
> Project: Flink
>  Issue Type: Improvement
>  Components: Batch Connectors and Input/Output Formats
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 1.4.0
>
>
> I propose removing the twitter-inputformat under flink-contrib.
> It provides no interesting properties in terms of accessing tweets (since it 
> just reads them from a file) in contrast to the streaming {{TwitterSource}}, 
> nor provides any significant functionality that cannot be achieved using the 
> jackson databind API.







[jira] [Created] (FLINK-6710) Remove twitter-inputformat

2017-05-24 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6710:
---

 Summary: Remove twitter-inputformat
 Key: FLINK-6710
 URL: https://issues.apache.org/jira/browse/FLINK-6710
 Project: Flink
  Issue Type: Improvement
  Components: Batch Connectors and Input/Output Formats
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
Priority: Minor
 Fix For: 1.4.0


I propose removing the twitter-inputformat under flink-contrib.

In contrast to the streaming {{TwitterSource}}, it offers no interesting way of 
accessing tweets (it just reads them from a file), nor does it provide any 
significant functionality that cannot be achieved with the jackson databind 
API directly.
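
To illustrate the "plain JSON parsing suffices" argument: reading tweets from a file and extracting a few fields needs nothing format-specific. The sketch below is in Python with the stdlib json module for brevity; the actual InputFormat and the jackson-based alternative are Java, and the field names simply follow the classic Twitter JSON schema:

```python
import json

def parse_tweets(lines):
    """Parse newline-delimited tweet JSON and extract a few fields.

    Mirrors what the removed TwitterInputFormat did: read tweets from a
    file (no Twitter API access), which any JSON parser can do directly.
    """
    tweets = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines between records
        obj = json.loads(line)
        # Field names follow the classic Twitter JSON schema.
        tweets.append({
            "user": obj.get("user", {}).get("screen_name"),
            "text": obj.get("text"),
        })
    return tweets

# Two sample records, as they would appear in a tweets-per-line file.
sample = [
    '{"user": {"screen_name": "flink"}, "text": "streams!"}',
    '',
    '{"user": {"screen_name": "bahir"}, "text": "connectors"}',
]
print(parse_tweets(sample))
```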





[jira] [Comment Edited] (FLINK-6698) Activate strict checkstyle

2017-05-24 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023154#comment-16023154
 ] 

Greg Hogan edited comment on FLINK-6698 at 5/24/17 7:18 PM:


Rebasing to master and running the strict checkstyle analysis across the whole 
project yields ~50,000 total violations. By module:
- runtime ~16,700
- core ~11,000
- java ~5700
- optimizer ~5400
- tests ~3500
- gelly/gelly-examples ~1800
- all others ~5900

Edit: updated totals to include test packages ... the IntelliJ inspection does 
not honor the "include test sources" setting but depends on the checkstyle 
preference "Only Java sources (including tests)".


was (Author: greghogan):
Rebasing to master and running the strict checkstyle analysis across the whole 
project yields ~27,000 total violations. By module:
- runtime ~8000
- core ~7300
- java ~3200
- optimizer ~2800
- examples-batch ~1000
- gelly/gelly-examples ~1000
- all others ~3700

> Activate strict checkstyle
> --
>
> Key: FLINK-6698
> URL: https://issues.apache.org/jira/browse/FLINK-6698
> Project: Flink
>  Issue Type: Improvement
>Reporter: Chesnay Schepler
>
> Umbrella issue for introducing the strict checkstyle, to keep track of which 
> modules are already covered.





[jira] [Commented] (FLINK-6688) Activate strict checkstyle in flink-test-utils

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023471#comment-16023471
 ] 

ASF GitHub Bot commented on FLINK-6688:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/3983

[FLINK-6688] Activate strict checkstyle for flink-test-utils



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6688_checkstyle_tutils

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3983.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3983


commit 8510be59f858ec0b024a1a8b9890977d23f7782a
Author: zentol 
Date:   2017-05-23T19:11:38Z

[FLINK-6691][checkstyle] Add separate block for scala imports

commit 49b8c357508859ed5a92e2b10f9d3ac3ce3092b8
Author: zentol 
Date:   2017-05-24T19:10:33Z

[FLINK-6688] Activate strict checkstyle for flink-test-utils




> Activate strict checkstyle in flink-test-utils
> --
>
> Key: FLINK-6688
> URL: https://issues.apache.org/jira/browse/FLINK-6688
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>







[jira] [Created] (FLINK-6709) Activate strict checkstyle for flink-gellies

2017-05-24 Thread Greg Hogan (JIRA)
Greg Hogan created FLINK-6709:
-

 Summary: Activate strict checkstyle for flink-gellies
 Key: FLINK-6709
 URL: https://issues.apache.org/jira/browse/FLINK-6709
 Project: Flink
  Issue Type: Sub-task
  Components: Gelly
Affects Versions: 1.4.0
Reporter: Greg Hogan
Assignee: Greg Hogan
Priority: Trivial
 Fix For: 1.4.0








[jira] [Updated] (FLINK-6691) Add checkstyle import block rule for scala imports

2017-05-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-6691:

Issue Type: Sub-task  (was: Improvement)
Parent: FLINK-6698

> Add checkstyle import block rule for scala imports
> --
>
> Key: FLINK-6691
> URL: https://issues.apache.org/jira/browse/FLINK-6691
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> Similar to java and javax imports we should give scala imports a separate 
> import block.





[jira] [Updated] (FLINK-6688) Activate strict checkstyle in flink-test-utils

2017-05-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-6688:

Issue Type: Sub-task  (was: Improvement)
Parent: FLINK-6698

> Activate strict checkstyle in flink-test-utils
> --
>
> Key: FLINK-6688
> URL: https://issues.apache.org/jira/browse/FLINK-6688
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[jira] [Commented] (FLINK-6658) Use scala Collections in scala CEP API

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023339#comment-16023339
 ] 

ASF GitHub Bot commented on FLINK-6658:
---

Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/3963#discussion_r118322678
  
--- Diff: 
flink-libraries/flink-cep-scala/src/main/scala/org/apache/flink/cep/scala/conditions/Context.scala
 ---
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.cep.scala.conditions
+
+import java.io.Serializable
+
+/**
+  * The context used when evaluating the {@link IterativeCondition condition}.
+  */
+trait Context[T] extends Serializable {
+  /**
+    * @return An {@link Iterable} over the already accepted elements
+    *         for a given pattern. Elements are iterated in the
+    *         order they were inserted in the pattern.
+    * @param name The name of the pattern.
+    */
+  def apply(name: String): Iterable[T]
+}
--- End diff --

Right, I also wasn't really decided on that one. Of course you can change it.


> Use scala Collections in scala CEP API
> --
>
> Key: FLINK-6658
> URL: https://issues.apache.org/jira/browse/FLINK-6658
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.3.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>






[GitHub] flink pull request #3963: [FLINK-6658][cep] Use scala Collections in scala C...

2017-05-24 Thread dawidwys
Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/3963#discussion_r118322678
  
--- Diff: 
flink-libraries/flink-cep-scala/src/main/scala/org/apache/flink/cep/scala/conditions/Context.scala
 ---
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.cep.scala.conditions
+
+import java.io.Serializable
+
+/**
+  * The context used when evaluating the {@link IterativeCondition condition}.
+  */
+trait Context[T] extends Serializable {
+  /**
+    * @return An {@link Iterable} over the already accepted elements
+    *         for a given pattern. Elements are iterated in the
+    *         order they were inserted in the pattern.
+    * @param name The name of the pattern.
+    */
+  def apply(name: String): Iterable[T]
+}
--- End diff --

Right, I also wasn't really decided on that one. Of course you can change it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6698) Activate strict checkstyle

2017-05-24 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023317#comment-16023317
 ] 

Chesnay Schepler commented on FLINK-6698:
-

From what I've observed so far, the majority of violations are trailing tabs,
trailing spaces in javadocs, wrong or missing use of {{}}, import order, and missing
javadocs for public classes.

In other words, a lot of things that we can change without really affecting the 
code history.

> Activate strict checkstyle
> --
>
> Key: FLINK-6698
> URL: https://issues.apache.org/jira/browse/FLINK-6698
> Project: Flink
>  Issue Type: Improvement
>Reporter: Chesnay Schepler
>
> Umbrella issue for introducing the strict checkstyle, to keep track of which 
> modules are already covered.





[jira] [Commented] (FLINK-6646) YARN session doesn't work with HA

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023200#comment-16023200
 ] 

ASF GitHub Bot commented on FLINK-6646:
---

Github user tedyu commented on a diff in the pull request:

https://github.com/apache/flink/pull/3981#discussion_r118304278
  
--- Diff: 
flink-yarn/src/main/scala/org/apache/flink/yarn/YarnJobManager.scala ---
@@ -89,5 +92,37 @@ class YarnJobManager(
      flinkConfiguration.getInteger(ConfigConstants.YARN_HEARTBEAT_DELAY_SECONDS, 5),
      TimeUnit.SECONDS)
 
+  val yarnFilesPath: Option[String] =
+    Option(System.getenv().get(YarnConfigKeys.FLINK_YARN_FILES))
+
   override val jobPollingInterval = YARN_HEARTBEAT_DELAY
+
+  override def handleMessage: Receive = {
+    handleYarnShutdown orElse super.handleMessage
+  }
+
+  def handleYarnShutdown: Receive = {
+    case msg: StopCluster =>
+      super.handleMessage(msg)
+
+      // do global cleanup if the yarn files path has been set
+      yarnFilesPath match {
+        case Some(filePath) =>
+          log.info(s"Deleting yarn application files under $filePath.")
+
+          val path = new Path(filePath)
+
+          try {
+            val fs = path.getFileSystem
+            fs.delete(path, true)
--- End diff --

Please check the return value from delete() call


> YARN session doesn't work with HA
> -
>
> Key: FLINK-6646
> URL: https://issues.apache.org/jira/browse/FLINK-6646
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Robert Metzger
>Assignee: Till Rohrmann
>Priority: Blocker
>
> While testing Flink 1.3.0 RC1, I ran into the following issue on the 
> JobManager.
> {code}
> 2017-05-19 14:41:38,030 INFO  
> org.apache.flink.runtime.webmonitor.JobManagerRetriever   - New leader 
> reachable under 
> akka.tcp://flink@permanent-qa-cluster-i7c9.c.astral-sorter-757.internal:36528/user/jobmanager:6539dc04-d7fe-4f85-a0b6-09bfb0de8a58.
> 2017-05-19 14:41:38,033 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Resource Manager associating with leading JobManager 
> Actor[akka://flink/user/jobmanager#1602741108] - leader session 
> 6539dc04-d7fe-4f85-a0b6-09bfb0de8a58
> 2017-05-19 14:41:38,033 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Requesting new TaskManager container with 1024 megabytes 
> memory. Pending requests: 1
> 2017-05-19 14:41:38,781 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Received new container: 
> container_149487096_0061_02_02 - Remaining pending container 
> requests: 0
> 2017-05-19 14:41:38,782 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Launching TaskManager in container ContainerInLaunch @ 
> 1495204898782: Container: [ContainerId: 
> container_149487096_0061_02_02, NodeId: 
> permanent-qa-cluster-d3iz.c.astral-sorter-757.internal:8041, NodeHttpAddress: 
> permanent-qa-cluster-d3iz.c.astral-sorter-757.internal:8042, Resource: 
> , Priority: 0, Token: Token { kind: ContainerToken, 
> service: 10.240.0.32:8041 }, ] on host 
> permanent-qa-cluster-d3iz.c.astral-sorter-757.internal
> 2017-05-19 14:41:38,788 INFO  
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - 
> Opening proxy : permanent-qa-cluster-d3iz.c.astral-sorter-757.internal:8041
> 2017-05-19 14:41:44,284 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Container container_149487096_0061_02_02 failed, with 
> a TaskManager in launch or registration. Exit status: -1000
> 2017-05-19 14:41:44,284 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Diagnostics for container 
> container_149487096_0061_02_02 in state COMPLETE : exitStatus=-1000 
> diagnostics=File does not exist: 
> hdfs://nameservice1/user/robert/.flink/application_149487096_0061/cf9287fe-ac75-4066-a648-91787d946890-taskmanager-conf.yaml
> java.io.FileNotFoundException: File does not exist: 
> hdfs://nameservice1/user/robert/.flink/application_149487096_0061/cf9287fe-ac75-4066-a648-91787d946890-taskmanager-conf.yaml
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1219)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
>   at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
>   at 

[GitHub] flink pull request #3981: [FLINK-6646] [yarn] Let YarnJobManager delete Yarn...

2017-05-24 Thread tedyu
Github user tedyu commented on a diff in the pull request:

https://github.com/apache/flink/pull/3981#discussion_r118304278
  
--- Diff: 
flink-yarn/src/main/scala/org/apache/flink/yarn/YarnJobManager.scala ---
@@ -89,5 +92,37 @@ class YarnJobManager(
      flinkConfiguration.getInteger(ConfigConstants.YARN_HEARTBEAT_DELAY_SECONDS, 5),
      TimeUnit.SECONDS)
 
+  val yarnFilesPath: Option[String] =
+    Option(System.getenv().get(YarnConfigKeys.FLINK_YARN_FILES))
+
   override val jobPollingInterval = YARN_HEARTBEAT_DELAY
+
+  override def handleMessage: Receive = {
+    handleYarnShutdown orElse super.handleMessage
+  }
+
+  def handleYarnShutdown: Receive = {
+    case msg: StopCluster =>
+      super.handleMessage(msg)
+
+      // do global cleanup if the yarn files path has been set
+      yarnFilesPath match {
+        case Some(filePath) =>
+          log.info(s"Deleting yarn application files under $filePath.")
+
+          val path = new Path(filePath)
+
+          try {
+            val fs = path.getFileSystem
+            fs.delete(path, true)
--- End diff --

Please check the return value from delete() call
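The reviewer's point, that delete() returns a boolean that should not be silently discarded, can be sketched outside Flink with plain java.nio. The class and helper names below are hypothetical, not Flink API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

/** Hypothetical sketch: a recursive delete whose boolean result is checked. */
public class CleanupSketch {

    /** Deletes a file tree; returns false if any entry could not be removed. */
    static boolean deleteRecursively(Path root) {
        if (!Files.exists(root)) {
            return true; // nothing to clean up
        }
        try (Stream<Path> entries = Files.walk(root)) {
            // sorted() buffers the whole tree first; reverse order deletes
            // children before their parent directories.
            return entries.sorted(Comparator.reverseOrder())
                    .map(p -> p.toFile().delete())
                    .reduce(true, Boolean::logicalAnd);
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("yarn-files");
        Files.createFile(dir.resolve("taskmanager-conf.yaml"));

        // The point of the review: act on the return value instead of
        // discarding it.
        if (!deleteRecursively(dir)) {
            System.err.println("Could not delete " + dir);
        }
        System.out.println(Files.exists(dir)); // false after a successful delete
    }
}
```

In the actual diff the analogous check would apply to `fs.delete(path, true)` on Flink's `FileSystem`, which likewise reports success through its boolean return value.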




[jira] [Commented] (FLINK-6708) Don't let the FlinkYarnSessionCli fail if it cannot retrieve the ClusterStatus

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023176#comment-16023176
 ] 

ASF GitHub Bot commented on FLINK-6708:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/3982

[FLINK-6708] [yarn] Harden FlinkYarnSessionCli to handle 
GetClusterStatusResponse exceptions

This PR is based on #3981.

This PR hardens the FlinkYarnSessionCli by handling exceptions which occur when
retrieving the GetClusterStatusResponse. If no such response is retrieved and instead
an exception is thrown, the CLI won't fail but will retry on the next attempt.

cc @rmetzger.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink hardenYarnSession

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3982.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3982


commit 72ce39a1752cc19669f003b70cc2708852a06ac5
Author: Till Rohrmann 
Date:   2017-05-24T15:59:51Z

[FLINK-6646] [yarn] Let YarnJobManager delete Yarn application files

Before, the YarnClusterClient decided when to delete the Yarn application files.
This is problematic because the client does not know whether a Yarn application
is being restarted or terminated. Due to this, the files were always deleted. This
prevents Yarn from restarting a failed ApplicationMaster, effectively thwarting
Flink's HA capabilities.

The PR changes the behaviour such that the YarnJobManager deletes the Yarn files
if it receives a StopCluster message. That way, we can be sure that the Yarn files
are deleted only if the cluster is intended to be shut down.

commit 9227539f97e6dbc77c5367b8c555b4ba0b2ad06d
Author: Till Rohrmann 
Date:   2017-05-24T16:26:57Z

[FLINK-6708] [yarn] Harden FlinkYarnSessionCli to handle 
GetClusterStatusResponse exceptions

This PR hardens the FlinkYarnSessionCli by handling exceptions which occur when
retrieving the GetClusterStatusResponse. If no such response is retrieved and instead
an exception is thrown, the CLI won't fail but will retry on the next attempt.
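The retry behaviour described here can be sketched as a polling step that converts a failed status fetch into an empty result instead of an error. Names are illustrative, not the actual FlinkYarnSessionCli code:

```java
import java.util.Optional;
import java.util.function.Supplier;

/** Illustrative sketch, not the actual FlinkYarnSessionCli implementation. */
public class StatusPollSketch {

    /**
     * One polling step: a failed status fetch yields Optional.empty()
     * instead of propagating, so the surrounding session loop can simply
     * retry on its next tick.
     */
    static Optional<String> pollOnce(Supplier<String> fetchStatus) {
        try {
            return Optional.of(fetchStatus.get());
        } catch (RuntimeException e) {
            System.err.println("Could not retrieve cluster status: " + e.getMessage());
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        // First attempt fails, but the CLI survives; the next attempt succeeds.
        Optional<String> first = pollOnce(() -> { throw new RuntimeException("timeout"); });
        Optional<String> second = pollOnce(() -> "RUNNING");
        System.out.println(first.isPresent());        // false
        System.out.println(second.orElse("unknown")); // RUNNING
    }
}
```

The Supplier stands in for the real GetClusterStatusResponse retrieval; the essential change is only that the exception is caught and logged rather than allowed to terminate the session.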




> Don't let the FlinkYarnSessionCli fail if it cannot retrieve the ClusterStatus
> --
>
> Key: FLINK-6708
> URL: https://issues.apache.org/jira/browse/FLINK-6708
> Project: Flink
>  Issue Type: Improvement
>  Components: YARN
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>
> The {{FlinkYarnSessionCli}} should not fail if it cannot retrieve the 
> {{GetClusterStatusResponse}}. This would harden Flink's Yarn session.





[GitHub] flink pull request #3982: [FLINK-6708] [yarn] Harden FlinkYarnSessionCli to ...

2017-05-24 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/3982

[FLINK-6708] [yarn] Harden FlinkYarnSessionCli to handle 
GetClusterStatusResponse exceptions

This PR is based on #3981.

This PR hardens the FlinkYarnSessionCli by handling exceptions which occur when
retrieving the GetClusterStatusResponse. If no such response is retrieved and instead
an exception is thrown, the CLI won't fail but will retry on the next attempt.

cc @rmetzger.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink hardenYarnSession

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3982.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3982


commit 72ce39a1752cc19669f003b70cc2708852a06ac5
Author: Till Rohrmann 
Date:   2017-05-24T15:59:51Z

[FLINK-6646] [yarn] Let YarnJobManager delete Yarn application files

Before, the YarnClusterClient decided when to delete the Yarn application files.
This is problematic because the client does not know whether a Yarn application
is being restarted or terminated. Due to this, the files were always deleted. This
prevents Yarn from restarting a failed ApplicationMaster, effectively thwarting
Flink's HA capabilities.

The PR changes the behaviour such that the YarnJobManager deletes the Yarn files
if it receives a StopCluster message. That way, we can be sure that the Yarn files
are deleted only if the cluster is intended to be shut down.

commit 9227539f97e6dbc77c5367b8c555b4ba0b2ad06d
Author: Till Rohrmann 
Date:   2017-05-24T16:26:57Z

[FLINK-6708] [yarn] Harden FlinkYarnSessionCli to handle 
GetClusterStatusResponse exceptions

This PR hardens the FlinkYarnSessionCli by handling exceptions which occur when
retrieving the GetClusterStatusResponse. If no such response is retrieved and instead
an exception is thrown, the CLI won't fail but will retry on the next attempt.






[jira] [Created] (FLINK-6708) Don't let the FlinkYarnSessionCli fail if it cannot retrieve the ClusterStatus

2017-05-24 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-6708:


 Summary: Don't let the FlinkYarnSessionCli fail if it cannot 
retrieve the ClusterStatus
 Key: FLINK-6708
 URL: https://issues.apache.org/jira/browse/FLINK-6708
 Project: Flink
  Issue Type: Improvement
  Components: YARN
Affects Versions: 1.3.0, 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
Priority: Minor


The {{FlinkYarnSessionCli}} should not fail if it cannot retrieve the 
{{GetClusterStatusResponse}}. This would harden Flink's Yarn session.





[jira] [Created] (FLINK-6707) Activate strict checkstyle for flink-examples

2017-05-24 Thread Greg Hogan (JIRA)
Greg Hogan created FLINK-6707:
-

 Summary: Activate strict checkstyle for flink-examples
 Key: FLINK-6707
 URL: https://issues.apache.org/jira/browse/FLINK-6707
 Project: Flink
  Issue Type: Sub-task
  Components: Examples
Affects Versions: 1.4.0
Reporter: Greg Hogan
Assignee: Greg Hogan
Priority: Trivial
 Fix For: 1.4.0








[jira] [Commented] (FLINK-6698) Activate strict checkstyle

2017-05-24 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023154#comment-16023154
 ] 

Greg Hogan commented on FLINK-6698:
---

Rebasing to master and running the strict checkstyle analysis across the whole 
project yields ~27,000 total violations. By module:
- runtime ~8000
- core ~7300
- java ~3200
- optimizer ~2800
- examples-batch ~1000
- gelly/gelly-examples ~1000
- all others ~3700
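Most of these violations are mechanical (per the earlier comment: trailing tabs and spaces, javadoc formatting). A quick pre-pass for the trailing-whitespace class of violation can be sketched as follows; this is not Flink's checkstyle tooling, just an illustrative scan one might run before `mvn checkstyle:check`:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

/** Illustrative pre-pass, not the checkstyle rule itself. */
public class TrailingWhitespaceScan {

    /** Counts lines that end in a space or tab, the most common violation. */
    static long countViolations(Path file) throws IOException {
        return Files.readAllLines(file).stream()
                .filter(line -> line.matches(".*[ \\t]$"))
                .count();
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical source file with one offending line.
        Path f = Files.createTempFile("Example", ".java");
        Files.write(f, List.of("class Example {", "    int x;  ", "}"));
        System.out.println(countViolations(f)); // 1
        Files.delete(f);
    }
}
```

Running such a scan per module gives a rough idea of how much of the ~27,000 total is trivially fixable without touching code history.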

> Activate strict checkstyle
> --
>
> Key: FLINK-6698
> URL: https://issues.apache.org/jira/browse/FLINK-6698
> Project: Flink
>  Issue Type: Improvement
>Reporter: Chesnay Schepler
>
> Umbrella issue for introducing the strict checkstyle, to keep track of which 
> modules are already covered.





[jira] [Commented] (FLINK-6658) Use scala Collections in scala CEP API

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023152#comment-16023152
 ] 

ASF GitHub Bot commented on FLINK-6658:
---

Github user kl0u commented on a diff in the pull request:

https://github.com/apache/flink/pull/3963#discussion_r118297586
  
--- Diff: 
flink-libraries/flink-cep-scala/src/main/scala/org/apache/flink/cep/scala/conditions/Context.scala
 ---
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.cep.scala.conditions
+
+import java.io.Serializable
+
+/**
+  * The context used when evaluating the {@link IterativeCondition condition}.
+  */
+trait Context[T] extends Serializable {
+  /**
+    * @return An {@link Iterable} over the already accepted elements
+    *         for a given pattern. Elements are iterated in the
+    *         order they were inserted in the pattern.
+    * @param name The name of the pattern.
+    */
+  def apply(name: String): Iterable[T]
+}
--- End diff --

Why have different method names between Java and Scala? In Java this
method is called `getEventsForPattern()`. We could rename both to something like
`getEventsForState()` if you agree, but I would suggest that the same method
should have the same name in both APIs; also, specifically for this case, writing
`context(stateName)` does not read nicely. If you agree I can change it
and then merge.


> Use scala Collections in scala CEP API
> --
>
> Key: FLINK-6658
> URL: https://issues.apache.org/jira/browse/FLINK-6658
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.3.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>






[GitHub] flink pull request #3963: [FLINK-6658][cep] Use scala Collections in scala C...

2017-05-24 Thread kl0u
Github user kl0u commented on a diff in the pull request:

https://github.com/apache/flink/pull/3963#discussion_r118297586
  
--- Diff: 
flink-libraries/flink-cep-scala/src/main/scala/org/apache/flink/cep/scala/conditions/Context.scala
 ---
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.cep.scala.conditions
+
+import java.io.Serializable
+
+/**
+  * The context used when evaluating the {@link IterativeCondition condition}.
+  */
+trait Context[T] extends Serializable {
+  /**
+    * @return An {@link Iterable} over the already accepted elements
+    *         for a given pattern. Elements are iterated in the
+    *         order they were inserted in the pattern.
+    * @param name The name of the pattern.
+    */
+  def apply(name: String): Iterable[T]
+}
--- End diff --

Why have different method names between Java and Scala? In Java this
method is called `getEventsForPattern()`. We could rename both to something like
`getEventsForState()` if you agree, but I would suggest that the same method
should have the same name in both APIs; also, specifically for this case, writing
`context(stateName)` does not read nicely. If you agree I can change it
and then merge.




[jira] [Commented] (FLINK-6646) YARN session doesn't work with HA

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023151#comment-16023151
 ] 

ASF GitHub Bot commented on FLINK-6646:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/3981

[FLINK-6646] [yarn] Let YarnJobManager delete Yarn application files

Before, the YarnClusterClient decided when to delete the Yarn application files.
This is problematic because the client does not know whether a Yarn application
is being restarted or terminated. Due to this, the files were always deleted. This
prevents Yarn from restarting a failed ApplicationMaster, effectively thwarting
Flink's HA capabilities.

The PR changes the behaviour such that the YarnJobManager deletes the Yarn files
if it receives a StopCluster message. That way, we can be sure that the Yarn files
are deleted only if the cluster is intended to be shut down.

cc @rmetzger 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink fixYarnSession

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3981.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3981


commit 72ce39a1752cc19669f003b70cc2708852a06ac5
Author: Till Rohrmann 
Date:   2017-05-24T15:59:51Z

[FLINK-6646] [yarn] Let YarnJobManager delete Yarn application files

Before, the YarnClusterClient decided when to delete the Yarn application files.
This is problematic because the client does not know whether a Yarn application
is being restarted or terminated. Due to this, the files were always deleted. This
prevents Yarn from restarting a failed ApplicationMaster, effectively thwarting
Flink's HA capabilities.

The PR changes the behaviour such that the YarnJobManager deletes the Yarn files
if it receives a StopCluster message. That way, we can be sure that the Yarn files
are deleted only if the cluster is intended to be shut down.




> YARN session doesn't work with HA
> -
>
> Key: FLINK-6646
> URL: https://issues.apache.org/jira/browse/FLINK-6646
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Robert Metzger
>Assignee: Till Rohrmann
>Priority: Blocker
>
> While testing Flink 1.3.0 RC1, I ran into the following issue on the 
> JobManager.
> {code}
> 2017-05-19 14:41:38,030 INFO  
> org.apache.flink.runtime.webmonitor.JobManagerRetriever   - New leader 
> reachable under 
> akka.tcp://flink@permanent-qa-cluster-i7c9.c.astral-sorter-757.internal:36528/user/jobmanager:6539dc04-d7fe-4f85-a0b6-09bfb0de8a58.
> 2017-05-19 14:41:38,033 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Resource Manager associating with leading JobManager 
> Actor[akka://flink/user/jobmanager#1602741108] - leader session 
> 6539dc04-d7fe-4f85-a0b6-09bfb0de8a58
> 2017-05-19 14:41:38,033 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Requesting new TaskManager container with 1024 megabytes 
> memory. Pending requests: 1
> 2017-05-19 14:41:38,781 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Received new container: 
> container_149487096_0061_02_02 - Remaining pending container 
> requests: 0
> 2017-05-19 14:41:38,782 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Launching TaskManager in container ContainerInLaunch @ 
> 1495204898782: Container: [ContainerId: 
> container_149487096_0061_02_02, NodeId: 
> permanent-qa-cluster-d3iz.c.astral-sorter-757.internal:8041, NodeHttpAddress: 
> permanent-qa-cluster-d3iz.c.astral-sorter-757.internal:8042, Resource: 
> , Priority: 0, Token: Token { kind: ContainerToken, 
> service: 10.240.0.32:8041 }, ] on host 
> permanent-qa-cluster-d3iz.c.astral-sorter-757.internal
> 2017-05-19 14:41:38,788 INFO  
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - 
> Opening proxy : permanent-qa-cluster-d3iz.c.astral-sorter-757.internal:8041
> 2017-05-19 14:41:44,284 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Container container_149487096_0061_02_02 failed, with 
> a TaskManager in launch or registration. Exit status: -1000
> 2017-05-19 14:41:44,284 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Diagnostics for container 
> container_149487096_0061_02_02 in state COMPLETE : exitStatus=-1000 
> diagnostics=File does not exist: 
> 

[GitHub] flink pull request #3981: [FLINK-6646] [yarn] Let YarnJobManager delete Yarn...

2017-05-24 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/3981

[FLINK-6646] [yarn] Let YarnJobManager delete Yarn application files

Before, the YarnClusterClient decided when to delete the Yarn application files.
This is problematic because the client does not know whether a Yarn application
is being restarted or terminated. Due to this, the files were always deleted. This
prevents Yarn from restarting a failed ApplicationMaster, effectively thwarting
Flink's HA capabilities.

The PR changes the behaviour such that the YarnJobManager deletes the Yarn files
if it receives a StopCluster message. That way, we can be sure that the Yarn files
are deleted only if the cluster is intended to be shut down.

cc @rmetzger 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink fixYarnSession

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3981.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3981


commit 72ce39a1752cc19669f003b70cc2708852a06ac5
Author: Till Rohrmann 
Date:   2017-05-24T15:59:51Z

[FLINK-6646] [yarn] Let YarnJobManager delete Yarn application files

Before, the YarnClusterClient decided when to delete the Yarn application files.
This is problematic because the client does not know whether a Yarn application
is being restarted or terminated. Due to this, the files were always deleted. This
prevents Yarn from restarting a failed ApplicationMaster, effectively thwarting
Flink's HA capabilities.

The PR changes the behaviour such that the YarnJobManager deletes the Yarn files
if it receives a StopCluster message. That way, we can be sure that the Yarn files
are deleted only if the cluster is intended to be shut down.






[jira] [Assigned] (FLINK-6594) Implement Flink Dispatcher for Kubernetes

2017-05-24 Thread Larry Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry Wu reassigned FLINK-6594:
---

Assignee: Larry Wu

> Implement Flink Dispatcher for Kubernetes
> -
>
> Key: FLINK-6594
> URL: https://issues.apache.org/jira/browse/FLINK-6594
> Project: Flink
>  Issue Type: New Feature
>  Components: Cluster Management
>Reporter: Larry Wu
>Assignee: Larry Wu
>  Labels: Kubernetes
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> This task is to implement Flink Dispatcher for Kubernetes, which is deployed 
> to Kubernetes cluster as a long-running pod. The Flink Dispatcher accepts job 
> submissions from Flink clients and asks Kubernetes API Server to create and 
> monitor a virtual cluster of Flink JobManager pod and Flink TaskManager Pods.





[jira] [Commented] (FLINK-6685) SafetyNetCloseableRegistry is closed prematurely in Task::triggerCheckpointBarrier

2017-05-24 Thread Gyula Fora (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023140#comment-16023140
 ] 

Gyula Fora commented on FLINK-6685:
---

Thanks Stefan, awesome job! I can confirm that this solves our savepoint issues 
:)


> SafetyNetCloseableRegistry is closed prematurely in 
> Task::triggerCheckpointBarrier
> --
>
> Key: FLINK-6685
> URL: https://issues.apache.org/jira/browse/FLINK-6685
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Blocker
>
> The {{SafetyNetCloseableRegistry}} is closed too early in 
> {{triggerCheckpointBarrier(...)}}. Right now, it seems like the code assumes 
> that {{statefulTask.triggerCheckpoint(...)}} is blocking - which it is not. 
> As a result, the registry can be closed while the checkpoint is still running.
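The race described above can be illustrated with a minimal, self-contained sketch (hypothetical names, not the actual Flink classes): closing a resource registry right after triggering work is only safe if the trigger is blocking; otherwise the close must be deferred until the asynchronous checkpoint signals completion.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public class RegistryLifetime {
    // Simplified stand-in for a closeable registry tracking open streams.
    static class Registry {
        final AtomicBoolean closed = new AtomicBoolean(false);
        void close() { closed.set(true); }
        boolean isClosed() { return closed.get(); }
    }

    public static void main(String[] args) throws Exception {
        Registry registry = new Registry();
        CountDownLatch checkpointDone = new CountDownLatch(1);

        Thread asyncCheckpoint = new Thread(() -> {
            // ... write checkpoint data through registry-tracked streams ...
            checkpointDone.countDown();
        });
        asyncCheckpoint.start();

        // A buggy variant would call registry.close() right here, racing
        // with the still-running checkpoint. The safe order: wait first.
        checkpointDone.await();
        registry.close();

        asyncCheckpoint.join();
        System.out.println("closed after checkpoint: " + registry.isClosed());
    }
}
```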





[jira] [Updated] (FLINK-6646) YARN session doesn't work with HA

2017-05-24 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-6646:
-
Priority: Blocker  (was: Critical)

> YARN session doesn't work with HA
> -
>
> Key: FLINK-6646
> URL: https://issues.apache.org/jira/browse/FLINK-6646
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Robert Metzger
>Assignee: Till Rohrmann
>Priority: Blocker
>
> While testing Flink 1.3.0 RC1, I ran into the following issue on the 
> JobManager.
> {code}
> 2017-05-19 14:41:38,030 INFO  
> org.apache.flink.runtime.webmonitor.JobManagerRetriever   - New leader 
> reachable under 
> akka.tcp://flink@permanent-qa-cluster-i7c9.c.astral-sorter-757.internal:36528/user/jobmanager:6539dc04-d7fe-4f85-a0b6-09bfb0de8a58.
> 2017-05-19 14:41:38,033 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Resource Manager associating with leading JobManager 
> Actor[akka://flink/user/jobmanager#1602741108] - leader session 
> 6539dc04-d7fe-4f85-a0b6-09bfb0de8a58
> 2017-05-19 14:41:38,033 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Requesting new TaskManager container with 1024 megabytes 
> memory. Pending requests: 1
> 2017-05-19 14:41:38,781 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Received new container: 
> container_149487096_0061_02_02 - Remaining pending container 
> requests: 0
> 2017-05-19 14:41:38,782 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Launching TaskManager in container ContainerInLaunch @ 
> 1495204898782: Container: [ContainerId: 
> container_149487096_0061_02_02, NodeId: 
> permanent-qa-cluster-d3iz.c.astral-sorter-757.internal:8041, NodeHttpAddress: 
> permanent-qa-cluster-d3iz.c.astral-sorter-757.internal:8042, Resource: 
> , Priority: 0, Token: Token { kind: ContainerToken, 
> service: 10.240.0.32:8041 }, ] on host 
> permanent-qa-cluster-d3iz.c.astral-sorter-757.internal
> 2017-05-19 14:41:38,788 INFO  
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - 
> Opening proxy : permanent-qa-cluster-d3iz.c.astral-sorter-757.internal:8041
> 2017-05-19 14:41:44,284 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Container container_149487096_0061_02_02 failed, with 
> a TaskManager in launch or registration. Exit status: -1000
> 2017-05-19 14:41:44,284 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Diagnostics for container 
> container_149487096_0061_02_02 in state COMPLETE : exitStatus=-1000 
> diagnostics=File does not exist: 
> hdfs://nameservice1/user/robert/.flink/application_149487096_0061/cf9287fe-ac75-4066-a648-91787d946890-taskmanager-conf.yaml
> java.io.FileNotFoundException: File does not exist: 
> hdfs://nameservice1/user/robert/.flink/application_149487096_0061/cf9287fe-ac75-4066-a648-91787d946890-taskmanager-conf.yaml
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1219)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
>   at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
>   at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
>   at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
>   at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>   at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
>   at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is the following:
> - JobManager1 starts from a yarn-session.sh
> - Job1 gets submitted to JobManager1
> - JobManager1 dies
> - YARN starts a new JM: JobManager2
> - in the meantime, errors on the 

[jira] [Commented] (FLINK-6646) YARN session doesn't work with HA

2017-05-24 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023123#comment-16023123
 ] 

Till Rohrmann commented on FLINK-6646:
--

Actually, I have to correct myself regarding the severity of the problem. Given 
that the {{ClusterClient}} deletes crucial files which are needed to recover 
from an {{ApplicationMaster}} failure, effectively thwarting Flink's HA 
capabilities, I would make this issue a blocker.

> YARN session doesn't work with HA
> -
>
> Key: FLINK-6646
> URL: https://issues.apache.org/jira/browse/FLINK-6646
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Robert Metzger
>Assignee: Till Rohrmann
>Priority: Critical
>
> While testing Flink 1.3.0 RC1, I ran into the following issue on the 
> JobManager.
> {code}
> 2017-05-19 14:41:38,030 INFO  
> org.apache.flink.runtime.webmonitor.JobManagerRetriever   - New leader 
> reachable under 
> akka.tcp://flink@permanent-qa-cluster-i7c9.c.astral-sorter-757.internal:36528/user/jobmanager:6539dc04-d7fe-4f85-a0b6-09bfb0de8a58.
> 2017-05-19 14:41:38,033 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Resource Manager associating with leading JobManager 
> Actor[akka://flink/user/jobmanager#1602741108] - leader session 
> 6539dc04-d7fe-4f85-a0b6-09bfb0de8a58
> 2017-05-19 14:41:38,033 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Requesting new TaskManager container with 1024 megabytes 
> memory. Pending requests: 1
> 2017-05-19 14:41:38,781 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Received new container: 
> container_149487096_0061_02_02 - Remaining pending container 
> requests: 0
> 2017-05-19 14:41:38,782 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Launching TaskManager in container ContainerInLaunch @ 
> 1495204898782: Container: [ContainerId: 
> container_149487096_0061_02_02, NodeId: 
> permanent-qa-cluster-d3iz.c.astral-sorter-757.internal:8041, NodeHttpAddress: 
> permanent-qa-cluster-d3iz.c.astral-sorter-757.internal:8042, Resource: 
> , Priority: 0, Token: Token { kind: ContainerToken, 
> service: 10.240.0.32:8041 }, ] on host 
> permanent-qa-cluster-d3iz.c.astral-sorter-757.internal
> 2017-05-19 14:41:38,788 INFO  
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - 
> Opening proxy : permanent-qa-cluster-d3iz.c.astral-sorter-757.internal:8041
> 2017-05-19 14:41:44,284 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Container container_149487096_0061_02_02 failed, with 
> a TaskManager in launch or registration. Exit status: -1000
> 2017-05-19 14:41:44,284 INFO  org.apache.flink.yarn.YarnFlinkResourceManager  
>   - Diagnostics for container 
> container_149487096_0061_02_02 in state COMPLETE : exitStatus=-1000 
> diagnostics=File does not exist: 
> hdfs://nameservice1/user/robert/.flink/application_149487096_0061/cf9287fe-ac75-4066-a648-91787d946890-taskmanager-conf.yaml
> java.io.FileNotFoundException: File does not exist: 
> hdfs://nameservice1/user/robert/.flink/application_149487096_0061/cf9287fe-ac75-4066-a648-91787d946890-taskmanager-conf.yaml
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1219)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
>   at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
>   at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
>   at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
>   at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>   at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
>   at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> 

[jira] [Commented] (FLINK-6669) [Build] Scala style check errror on Windows

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023098#comment-16023098
 ] 

ASF GitHub Bot commented on FLINK-6669:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3967
  
@jinmingjian I've merged the change to master, could you close this PR?


> [Build] Scala style check errror on Windows
> ---
>
> Key: FLINK-6669
> URL: https://issues.apache.org/jira/browse/FLINK-6669
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0
> Environment: Windows
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Minor
> Fix For: 1.4.0
>
> Attachments: FLINK-6669.patch
>
>
> When building the source code on Windows, a Scala style check error happened.
> Here are the error messages.
> [INFO]
> [INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ flink-scala_2.10 
> ---
> error 
> file=E:\github\flink\flink-scala\src\main\scala\org\apache\flink\api\scala\utils\package.scala
>  message=Input length = 2
> Saving to outputFile=E:\github\flink\flink-scala\target\scalastyle-output.xml
> Processed 78 file(s)
> Found 1 errors
> Found 0 warnings
> Found 0 infos
> Finished in 1189 ms
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] force-shading .. SUCCESS [ 37.206 
> s]
> [INFO] flink .. SUCCESS [03:27 
> min]
> [INFO] flink-annotations .. SUCCESS [  3.020 
> s]
> [INFO] flink-shaded-hadoop  SUCCESS [  0.928 
> s]
> [INFO] flink-shaded-hadoop2 ... SUCCESS [ 15.314 
> s]
> [INFO] flink-shaded-hadoop2-uber .. SUCCESS [ 13.085 
> s]
> [INFO] flink-shaded-curator ... SUCCESS [  0.234 
> s]
> [INFO] flink-shaded-curator-recipes ... SUCCESS [  3.336 
> s]
> [INFO] flink-shaded-curator-test .. SUCCESS [  2.948 
> s]
> [INFO] flink-metrics .. SUCCESS [  0.286 
> s]
> [INFO] flink-metrics-core . SUCCESS [  9.065 
> s]
> [INFO] flink-test-utils-parent  SUCCESS [  0.327 
> s]
> [INFO] flink-test-utils-junit . SUCCESS [  1.452 
> s]
> [INFO] flink-core . SUCCESS [ 54.277 
> s]
> [INFO] flink-java . SUCCESS [ 25.244 s]
> [INFO] flink-runtime .. SUCCESS [03:08 
> min]
> [INFO] flink-optimizer  SUCCESS [ 14.540 
> s]
> [INFO] flink-clients .. SUCCESS [ 14.457 
> s]
> [INFO] flink-streaming-java ... SUCCESS [ 58.130 
> s]
> [INFO] flink-test-utils ... SUCCESS [ 19.906 
> s]
> [INFO] flink-scala  FAILURE [ 56.634 
> s]
> [INFO] flink-runtime-web .. SKIPPED
> I think this is caused by the Windows default encoding. When I set the 
> inputEncoding to UTF-8 in scalastyle-maven-plugin, the error doesn't happen.
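A hedged sketch of the proposed fix: pinning the plugin's input encoding in the module pom so the check does not depend on the platform default charset. The option names are assumed from the scalastyle-maven-plugin documentation, not taken from this patch.

```xml
<!-- Sketch: force UTF-8 regardless of the Windows default encoding. -->
<plugin>
  <groupId>org.scalastyle</groupId>
  <artifactId>scalastyle-maven-plugin</artifactId>
  <version>0.8.0</version>
  <configuration>
    <inputEncoding>UTF-8</inputEncoding>
    <outputEncoding>UTF-8</outputEncoding>
  </configuration>
</plugin>
```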





[GitHub] flink issue #3967: [FLINK-6669] Scala style check errror on Windows

2017-05-24 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3967
  
@jinmingjian I've merged the change to master, could you close this PR?




[jira] [Updated] (FLINK-6689) Remote StreamExecutionEnvironment fails to submit jobs against LocalFlinkMiniCluster

2017-05-24 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-6689:

Component/s: Client

> Remote StreamExecutionEnvironment fails to submit jobs against 
> LocalFlinkMiniCluster
> 
>
> Key: FLINK-6689
> URL: https://issues.apache.org/jira/browse/FLINK-6689
> Project: Flink
>  Issue Type: Bug
>  Components: Client, Job-Submission
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
> Fix For: 1.3.0
>
>
> The following Flink programs fails to execute with the current 1.3 branch 
> (1.2 works) because the leader session ID being used is wrong:
> {code:java}
> final String jobManagerAddress = "localhost";
> final int jobManagerPort = ConfigConstants.DEFAULT_JOB_MANAGER_IPC_PORT;
> final Configuration config = new Configuration();
> config.setString(ConfigConstants.JOB_MANAGER_IPC_ADDRESS_KEY, jobManagerAddress);
> config.setInteger(ConfigConstants.JOB_MANAGER_IPC_PORT_KEY, jobManagerPort);
> config.setBoolean(ConfigConstants.LOCAL_START_WEBSERVER, true);
> final LocalFlinkMiniCluster cluster = new LocalFlinkMiniCluster(config, false);
> cluster.start(true);
> final StreamExecutionEnvironment env =
> StreamExecutionEnvironment.createRemoteEnvironment(jobManagerAddress, jobManagerPort);
> env.fromElements(1L).addSink(new DiscardingSink());
> // fails due to leader session id being wrong:
> env.execute("test");
> {code}
> Output from the logs contains:
> {code}
> ...
> 16:24:57,551 INFO  org.apache.flink.runtime.webmonitor.JobManagerRetriever
>- New leader reachable under 
> akka.tcp://flink@localhost:6123/user/jobmanager:ff0d56cf-6205-4dd4-a266-03847f4d6944.
> 16:24:57,894 INFO  
> org.apache.flink.streaming.api.environment.RemoteStreamEnvironment  - Running 
> remotely at localhost:6123
> 16:24:58,121 INFO  org.apache.flink.client.program.StandaloneClusterClient
>- Starting client actor system.
> 16:24:58,123 INFO  org.apache.flink.runtime.util.LeaderRetrievalUtils 
>- Trying to select the network interface and address to use by connecting 
> to the leading JobManager.
> 16:24:58,128 INFO  org.apache.flink.runtime.util.LeaderRetrievalUtils 
>- TaskManager will try to connect for 1 milliseconds before falling 
> back to heuristics
> 16:24:58,132 INFO  org.apache.flink.runtime.net.ConnectionUtils   
>- Retrieved new target address localhost/127.0.0.1:6123.
> 16:24:58,258 INFO  akka.event.slf4j.Slf4jLogger   
>- Slf4jLogger started
> 16:24:58,262 INFO  Remoting   
>- Starting remoting
> 16:24:58,375 INFO  Remoting   
>- Remoting started; listening on addresses 
> :[akka.tcp://fl...@nico-work.fritz.box:43413]
> 16:24:58,376 INFO  org.apache.flink.client.program.StandaloneClusterClient
>- Submitting job with JobID: 9bef4793a4b7f4caaad96bd28211cbb9. Waiting for 
> job completion.
> Submitting job with JobID: 9bef4793a4b7f4caaad96bd28211cbb9. Waiting for job 
> completion.
> 16:24:58,382 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor   
>- Disconnect from JobManager null.
> 16:24:58,398 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor   
>- Received SubmitJobAndWait(JobGraph(jobId: 
> 9bef4793a4b7f4caaad96bd28211cbb9)) but there is no connection to a JobManager 
> yet.
> 16:24:58,398 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor   
>- Received job test (9bef4793a4b7f4caaad96bd28211cbb9).
> 16:24:58,429 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor   
>- Connect to JobManager 
> Actor[akka.tcp://flink@localhost:6123/user/jobmanager#1881889998].
> 16:24:58,432 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor   
>- Connected to JobManager at 
> Actor[akka.tcp://flink@localhost:6123/user/jobmanager#1881889998] with leader 
> session id ----.
> Connected to JobManager at 
> Actor[akka.tcp://flink@localhost:6123/user/jobmanager#1881889998] with leader 
> session id ----.
> 16:24:58,432 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor   
>- Sending message to JobManager 
> akka.tcp://flink@localhost:6123/user/jobmanager to submit job test 
> (9bef4793a4b7f4caaad96bd28211cbb9) and wait for progress
> 16:24:58,433 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor   
>- Upload jar files to job manager 
> akka.tcp://flink@localhost:6123/user/jobmanager.
> 16:24:58,440 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor   
>- Submit job to the job manager 
> 

[jira] [Commented] (FLINK-6515) KafkaConsumer checkpointing fails because of ClassLoader issues

2017-05-24 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023073#comment-16023073
 ] 

Aljoscha Krettek commented on FLINK-6515:
-

We should double-check; this not working would definitely be a release blocker.

> KafkaConsumer checkpointing fails because of ClassLoader issues
> ---
>
> Key: FLINK-6515
> URL: https://issues.apache.org/jira/browse/FLINK-6515
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.3.0
>Reporter: Aljoscha Krettek
>Assignee: Stephan Ewen
>Priority: Blocker
> Fix For: 1.3.0, 1.4.0
>
>
> A job with Kafka and checkpointing enabled fails with:
> {code}
> java.lang.Exception: Error while triggering checkpoint 1 for Source: Custom 
> Source -> Map -> Sink: Unnamed (1/1)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1195)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.Exception: Could not perform checkpoint 1 for operator 
> Source: Custom Source -> Map -> Sink: Unnamed (1/1).
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:526)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask.triggerCheckpoint(SourceStreamTask.java:112)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1184)
>   ... 5 more
> Caused by: java.lang.Exception: Could not complete snapshot 1 for operator 
> Source: Custom Source -> Map -> Sink: Unnamed (1/1).
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:407)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1157)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1089)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:653)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:589)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:520)
>   ... 7 more
> Caused by: java.lang.RuntimeException: Could not copy instance of 
> (KafkaTopicPartition{topic='test-input', partition=0},-1).
>   at 
> org.apache.flink.runtime.state.JavaSerializer.copy(JavaSerializer.java:54)
>   at 
> org.apache.flink.runtime.state.JavaSerializer.copy(JavaSerializer.java:32)
>   at 
> org.apache.flink.runtime.state.ArrayListSerializer.copy(ArrayListSerializer.java:71)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.(DefaultOperatorStateBackend.java:368)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.deepCopy(DefaultOperatorStateBackend.java:380)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend.snapshot(DefaultOperatorStateBackend.java:191)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:392)
>   ... 12 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:64)
>   at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1620)
>   at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at 

[jira] [Commented] (FLINK-6706) remove ChaosMonkeyITCase

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023058#comment-16023058
 ] 

ASF GitHub Bot commented on FLINK-6706:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3980
  
+1.


> remove ChaosMonkeyITCase
> 
>
> Key: FLINK-6706
> URL: https://issues.apache.org/jira/browse/FLINK-6706
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
>
> {{ChaosMonkeyITCase}} has been disabled since Dec 2015 and is probably 
> outdated. In its current form, it doesn't make sense to keep it and would 
> need a replacement if desired.





[GitHub] flink issue #3980: [FLINK-6706] remove outdated/unused ChaosMonkeyITCase

2017-05-24 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3980
  
+1.




[jira] [Commented] (FLINK-6494) Migrate ResourceManager configuration options

2017-05-24 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023038#comment-16023038
 ] 

Chesnay Schepler commented on FLINK-6494:
-

Yes, the YARN and Mesos options should be migrated as well.

> Migrate ResourceManager configuration options
> -
>
> Key: FLINK-6494
> URL: https://issues.apache.org/jira/browse/FLINK-6494
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination, ResourceManager
>Reporter: Chesnay Schepler
>Assignee: Fang Yong
>






[jira] [Commented] (FLINK-6706) remove ChaosMonkeyITCase

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023031#comment-16023031
 ] 

ASF GitHub Bot commented on FLINK-6706:
---

GitHub user NicoK opened a pull request:

https://github.com/apache/flink/pull/3980

[FLINK-6706] remove outdated/unused ChaosMonkeyITCase

This test was disabled in Dec 2015 due to its instability and never made it
back again. It is probably outdated and may not even work anymore due to the
changes since then.
Since it doesn't make sense to keep it in its current form, let's remove it.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/NicoK/flink flink-6706

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3980.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3980


commit cb2e591e588f9d5a22967a3cc563734dae7186ec
Author: Nico Kruber 
Date:   2017-05-24T14:54:11Z

[FLINK-6706] remove outdated/unused ChaosMonkeyITCase

This test was disabled in Dec 2015 due to its instability and never made it
back again. It is probably outdated and may not even work anymore due to the
changes since then.
Since it doesn't make sense to keep it in its current form, let's remove it.




> remove ChaosMonkeyITCase
> 
>
> Key: FLINK-6706
> URL: https://issues.apache.org/jira/browse/FLINK-6706
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
>
> {{ChaosMonkeyITCase}} has been disabled since Dec 2015 and is probably 
> outdated. In its current form, it doesn't make sense to keep it and would 
> need a replacement if desired.





[GitHub] flink pull request #3980: [FLINK-6706] remove outdated/unused ChaosMonkeyITC...

2017-05-24 Thread NicoK
GitHub user NicoK opened a pull request:

https://github.com/apache/flink/pull/3980

[FLINK-6706] remove outdated/unused ChaosMonkeyITCase

This test was disabled in Dec 2015 due to its instability and never made it
back again. It is probably outdated and may not even work anymore due to the
changes since then.
Since it doesn't make sense to keep it in its current form, let's remove it.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/NicoK/flink flink-6706

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3980.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3980


commit cb2e591e588f9d5a22967a3cc563734dae7186ec
Author: Nico Kruber 
Date:   2017-05-24T14:54:11Z

[FLINK-6706] remove outdated/unused ChaosMonkeyITCase

This test was disabled in Dec 2015 due to its instability and never made it
back again. It is probably outdated and may not even work anymore due to the
changes since then.
Since it doesn't make sense to keep it in its current form, let's remove it.






[jira] [Created] (FLINK-6706) remove ChaosMonkeyITCase

2017-05-24 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-6706:
--

 Summary: remove ChaosMonkeyITCase
 Key: FLINK-6706
 URL: https://issues.apache.org/jira/browse/FLINK-6706
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Affects Versions: 1.3.0
Reporter: Nico Kruber


{{ChaosMonkeyITCase}} has been disabled since Dec 2015 and is probably 
outdated. In its current form, it doesn't make sense to keep it and would need 
a replacement if desired.





[jira] [Reopened] (FLINK-6669) [Build] Scala style check errror on Windows

2017-05-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-6669:
-

Adding it to 1.3 as well.

> [Build] Scala style check errror on Windows
> ---
>
> Key: FLINK-6669
> URL: https://issues.apache.org/jira/browse/FLINK-6669
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0
> Environment: Windows
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Minor
> Fix For: 1.4.0
>
> Attachments: FLINK-6669.patch
>
>
> When building the source code on Windows, a Scala style check error happened.
> Here are the error messages.
> [INFO]
> [INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ flink-scala_2.10 
> ---
> error 
> file=E:\github\flink\flink-scala\src\main\scala\org\apache\flink\api\scala\utils\package.scala
>  message=Input length = 2
> Saving to outputFile=E:\github\flink\flink-scala\target\scalastyle-output.xml
> Processed 78 file(s)
> Found 1 errors
> Found 0 warnings
> Found 0 infos
> Finished in 1189 ms
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] force-shading .. SUCCESS [ 37.206 
> s]
> [INFO] flink .. SUCCESS [03:27 
> min]
> [INFO] flink-annotations .. SUCCESS [  3.020 
> s]
> [INFO] flink-shaded-hadoop  SUCCESS [  0.928 
> s]
> [INFO] flink-shaded-hadoop2 ... SUCCESS [ 15.314 
> s]
> [INFO] flink-shaded-hadoop2-uber .. SUCCESS [ 13.085 
> s]
> [INFO] flink-shaded-curator ... SUCCESS [  0.234 
> s]
> [INFO] flink-shaded-curator-recipes ... SUCCESS [  3.336 
> s]
> [INFO] flink-shaded-curator-test .. SUCCESS [  2.948 
> s]
> [INFO] flink-metrics .. SUCCESS [  0.286 
> s]
> [INFO] flink-metrics-core . SUCCESS [  9.065 
> s]
> [INFO] flink-test-utils-parent  SUCCESS [  0.327 
> s]
> [INFO] flink-test-utils-junit . SUCCESS [  1.452 
> s]
> [INFO] flink-core . SUCCESS [ 54.277 
> s]
> [INFO] flink-java . SUCCESS [ 25.244 s]
> [INFO] flink-runtime .. SUCCESS [03:08 
> min]
> [INFO] flink-optimizer  SUCCESS [ 14.540 
> s]
> [INFO] flink-clients .. SUCCESS [ 14.457 
> s]
> [INFO] flink-streaming-java ... SUCCESS [ 58.130 
> s]
> [INFO] flink-test-utils ... SUCCESS [ 19.906 
> s]
> [INFO] flink-scala  FAILURE [ 56.634 
> s]
> [INFO] flink-runtime-web .. SKIPPED
> I think this is caused by the Windows default encoding. When I set the 
> inputEncoding to UTF-8 in scalastyle-maven-plugin, the error doesn't happen.





[jira] [Closed] (FLINK-6431) Activate strict checkstyle for flink-metrics

2017-05-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-6431.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

1.4: ce573c65e2573ae12c8f6a76cc580445886a0a74

> Activate strict checkstyle for flink-metrics
> 
>
> Key: FLINK-6431
> URL: https://issues.apache.org/jira/browse/FLINK-6431
> Project: Flink
>  Issue Type: Sub-task
>  Components: Metrics
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[jira] [Closed] (FLINK-6432) Activate strict checkstyle for flink-python

2017-05-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-6432.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

1.4: 04fae5362bdc933c111c92a9cc3b3b2c1d71850e

> Activate strict checkstyle for flink-python
> ---
>
> Key: FLINK-6432
> URL: https://issues.apache.org/jira/browse/FLINK-6432
> Project: Flink
>  Issue Type: Sub-task
>  Components: Python API
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[jira] [Closed] (FLINK-6675) Activate strict checkstyle for flink-annotations

2017-05-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-6675.
---
Resolution: Fixed

1.4: dbcc456a652e980323b1b23692578e3c22e25e68

> Activate strict checkstyle for flink-annotations
> 
>
> Key: FLINK-6675
> URL: https://issues.apache.org/jira/browse/FLINK-6675
> Project: Flink
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>






[jira] [Reopened] (FLINK-6320) Flakey JobManagerHAJobGraphRecoveryITCase

2017-05-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-6320:
-

I'll also push this one to 1.3.

> Flakey JobManagerHAJobGraphRecoveryITCase
> -
>
> Key: FLINK-6320
> URL: https://issues.apache.org/jira/browse/FLINK-6320
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>  Labels: test-stability
> Fix For: 1.4.0
>
>
> It looks as if there is a race condition in the cleanup of 
> {{JobManagerHAJobGraphRecoveryITCase}}.
> {code}
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 50.271 sec 
> <<< FAILURE! - in 
> org.apache.flink.test.recovery.JobManagerHAJobGraphRecoveryITCase
> testJobPersistencyWhenJobManagerShutdown(org.apache.flink.test.recovery.JobManagerHAJobGraphRecoveryITCase)
>   Time elapsed: 0.129 sec  <<< ERROR!
> java.io.FileNotFoundException: File does not exist: 
> /tmp/9b63934b-789d-428c-aa9e-47d5d8fa1e32/recovery/submittedJobGraphf763d61fba47
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2275)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
>   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
>   at 
> org.apache.flink.test.recovery.JobManagerHAJobGraphRecoveryITCase.cleanUp(JobManagerHAJobGraphRecoveryITCase.java:112)
> {code}
> Full log: 
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/223124016/log.txt
> Maybe a rule-based temporary directory is a better solution:
> {code:java}
>   @Rule
>   public TemporaryFolder tempFolder = new TemporaryFolder();
> {code}
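The race the stack trace shows — one cleanup path deleting a file that another has already removed — can be reproduced with plain `java.nio.file` calls. This is a standalone sketch, not Flink or test code from the ticket; the class and method names are invented for illustration:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class CleanupRaceDemo {

    /**
     * Simulates the race from the stack trace: the file is already gone by the
     * time the second, strict delete runs. Returns true iff the strict delete
     * throws while the tolerant delete simply reports "nothing to do".
     */
    static boolean strictDeleteThrows() {
        try {
            Path dir = Files.createTempDirectory("recovery-demo");
            Path file = Files.createFile(dir.resolve("submittedJobGraph"));

            Files.delete(file); // the "other" cleanup wins the race

            boolean threw = false;
            try {
                Files.delete(file); // strict delete, like FileUtils.forceDelete
            } catch (NoSuchFileException e) {
                threw = true;
            }
            // Tolerant delete: returns false for a missing file, no exception.
            boolean tolerant = Files.deleteIfExists(file);
            Files.delete(dir);
            return threw && !tolerant;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("race reproduced: " + strictDeleteThrows());
    }
}
```

A `@Rule`-based `TemporaryFolder`, as the ticket suggests, sidesteps the problem entirely: JUnit owns the directory lifecycle, so no manual `FileUtils.cleanDirectory()` call can race against it.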





[jira] [Assigned] (FLINK-6320) Flakey JobManagerHAJobGraphRecoveryITCase

2017-05-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-6320:
---

Assignee: Nico Kruber

> Flakey JobManagerHAJobGraphRecoveryITCase
> -
>
> Key: FLINK-6320
> URL: https://issues.apache.org/jira/browse/FLINK-6320
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>  Labels: test-stability
> Fix For: 1.4.0
>
>
> It looks as if there is a race condition in the cleanup of 
> {{JobManagerHAJobGraphRecoveryITCase}}.
> {code}
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 50.271 sec 
> <<< FAILURE! - in 
> org.apache.flink.test.recovery.JobManagerHAJobGraphRecoveryITCase
> testJobPersistencyWhenJobManagerShutdown(org.apache.flink.test.recovery.JobManagerHAJobGraphRecoveryITCase)
>   Time elapsed: 0.129 sec  <<< ERROR!
> java.io.FileNotFoundException: File does not exist: 
> /tmp/9b63934b-789d-428c-aa9e-47d5d8fa1e32/recovery/submittedJobGraphf763d61fba47
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2275)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
>   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
>   at 
> org.apache.flink.test.recovery.JobManagerHAJobGraphRecoveryITCase.cleanUp(JobManagerHAJobGraphRecoveryITCase.java:112)
> {code}
> Full log: 
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/223124016/log.txt
> Maybe a rule-based temporary directory is a better solution:
> {code:java}
>   @Rule
>   public TemporaryFolder tempFolder = new TemporaryFolder();
> {code}





[jira] [Closed] (FLINK-6320) Flakey JobManagerHAJobGraphRecoveryITCase

2017-05-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-6320.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

1.4: 6ad3d140f35722055c9011dbee88d19319cfbfbe

> Flakey JobManagerHAJobGraphRecoveryITCase
> -
>
> Key: FLINK-6320
> URL: https://issues.apache.org/jira/browse/FLINK-6320
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>  Labels: test-stability
> Fix For: 1.4.0
>
>
> It looks as if there is a race condition in the cleanup of 
> {{JobManagerHAJobGraphRecoveryITCase}}.
> {code}
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 50.271 sec 
> <<< FAILURE! - in 
> org.apache.flink.test.recovery.JobManagerHAJobGraphRecoveryITCase
> testJobPersistencyWhenJobManagerShutdown(org.apache.flink.test.recovery.JobManagerHAJobGraphRecoveryITCase)
>   Time elapsed: 0.129 sec  <<< ERROR!
> java.io.FileNotFoundException: File does not exist: 
> /tmp/9b63934b-789d-428c-aa9e-47d5d8fa1e32/recovery/submittedJobGraphf763d61fba47
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2275)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
>   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
>   at 
> org.apache.flink.test.recovery.JobManagerHAJobGraphRecoveryITCase.cleanUp(JobManagerHAJobGraphRecoveryITCase.java:112)
> {code}
> Full log: 
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/223124016/log.txt
> Maybe a rule-based temporary directory is a better solution:
> {code:java}
>   @Rule
>   public TemporaryFolder tempFolder = new TemporaryFolder();
> {code}





[jira] [Commented] (FLINK-6432) Activate strict checkstyle for flink-python

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023002#comment-16023002
 ] 

ASF GitHub Bot commented on FLINK-6432:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3969


> Activate strict checkstyle for flink-python
> ---
>
> Key: FLINK-6432
> URL: https://issues.apache.org/jira/browse/FLINK-6432
> Project: Flink
>  Issue Type: Sub-task
>  Components: Python API
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>






[jira] [Closed] (FLINK-6669) [Build] Scala style check error on Windows

2017-05-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-6669.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

1.4: f827d730e40ac1c71fe974d4fd674e55ad530cdb

> [Build] Scala style check error on Windows
> ---
>
> Key: FLINK-6669
> URL: https://issues.apache.org/jira/browse/FLINK-6669
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0
> Environment: Windows
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Minor
> Fix For: 1.4.0
>
> Attachments: FLINK-6669.patch
>
>
> When building the source code on Windows, a Scala style check error happened.
> Here are the error messages.
> [INFO]
> [INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ flink-scala_2.10 
> ---
> error 
> file=E:\github\flink\flink-scala\src\main\scala\org\apache\flink\api\scala\utils\package.scala
>  message=Input length = 2
> Saving to outputFile=E:\github\flink\flink-scala\target\scalastyle-output.xml
> Processed 78 file(s)
> Found 1 errors
> Found 0 warnings
> Found 0 infos
> Finished in 1189 ms
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] force-shading .. SUCCESS [ 37.206 
> s]
> [INFO] flink .. SUCCESS [03:27 
> min]
> [INFO] flink-annotations .. SUCCESS [  3.020 
> s]
> [INFO] flink-shaded-hadoop  SUCCESS [  0.928 
> s]
> [INFO] flink-shaded-hadoop2 ... SUCCESS [ 15.314 
> s]
> [INFO] flink-shaded-hadoop2-uber .. SUCCESS [ 13.085 
> s]
> [INFO] flink-shaded-curator ... SUCCESS [  0.234 
> s]
> [INFO] flink-shaded-curator-recipes ... SUCCESS [  3.336 
> s]
> [INFO] flink-shaded-curator-test .. SUCCESS [  2.948 
> s]
> [INFO] flink-metrics .. SUCCESS [  0.286 
> s]
> [INFO] flink-metrics-core . SUCCESS [  9.065 
> s]
> [INFO] flink-test-utils-parent  SUCCESS [  0.327 
> s]
> [INFO] flink-test-utils-junit . SUCCESS [  1.452 
> s]
> [INFO] flink-core . SUCCESS [ 54.277 
> s]
> [INFO] flink-java . SUCCESS [ 
> 25.244 s]
> [INFO] flink-runtime .. SUCCESS [03:08 
> min]
> [INFO] flink-optimizer  SUCCESS [ 14.540 
> s]
> [INFO] flink-clients .. SUCCESS [ 14.457 
> s]
> [INFO] flink-streaming-java ... SUCCESS [ 58.130 
> s]
> [INFO] flink-test-utils ... SUCCESS [ 19.906 
> s]
> [INFO] flink-scala  FAILURE [ 56.634 
> s]
> [INFO] flink-runtime-web .. SKIPPED
> I think this is caused by the Windows default encoding. When I set the 
> inputEncoding to UTF-8 in scalastyle-maven-plugin, the error doesn't happen.
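The fix described above could look roughly like the following in the `scalastyle-maven-plugin` section of the parent `pom.xml`. This is a hedged sketch: the plugin version comes from the log output above, but the exact placement and surrounding configuration in Flink's build are assumptions:

```xml
<plugin>
  <groupId>org.scalastyle</groupId>
  <artifactId>scalastyle-maven-plugin</artifactId>
  <version>0.8.0</version>
  <configuration>
    <!-- Force UTF-8 instead of the Windows default encoding (e.g. Cp1252),
         which chokes on multi-byte characters ("Input length = 2"). -->
    <inputEncoding>UTF-8</inputEncoding>
    <outputEncoding>UTF-8</outputEncoding>
  </configuration>
</plugin>
```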





[jira] [Commented] (FLINK-6320) Flakey JobManagerHAJobGraphRecoveryITCase

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023006#comment-16023006
 ] 

ASF GitHub Bot commented on FLINK-6320:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3966


> Flakey JobManagerHAJobGraphRecoveryITCase
> -
>
> Key: FLINK-6320
> URL: https://issues.apache.org/jira/browse/FLINK-6320
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
>  Labels: test-stability
>
> It looks as if there is a race condition in the cleanup of 
> {{JobManagerHAJobGraphRecoveryITCase}}.
> {code}
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 50.271 sec 
> <<< FAILURE! - in 
> org.apache.flink.test.recovery.JobManagerHAJobGraphRecoveryITCase
> testJobPersistencyWhenJobManagerShutdown(org.apache.flink.test.recovery.JobManagerHAJobGraphRecoveryITCase)
>   Time elapsed: 0.129 sec  <<< ERROR!
> java.io.FileNotFoundException: File does not exist: 
> /tmp/9b63934b-789d-428c-aa9e-47d5d8fa1e32/recovery/submittedJobGraphf763d61fba47
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2275)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
>   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
>   at 
> org.apache.flink.test.recovery.JobManagerHAJobGraphRecoveryITCase.cleanUp(JobManagerHAJobGraphRecoveryITCase.java:112)
> {code}
> Full log: 
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/223124016/log.txt
> Maybe a rule-based temporary directory is a better solution:
> {code:java}
>   @Rule
>   public TemporaryFolder tempFolder = new TemporaryFolder();
> {code}





[jira] [Commented] (FLINK-5376) Misleading log statements in UnorderedStreamElementQueue

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023007#comment-16023007
 ] 

ASF GitHub Bot commented on FLINK-5376:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3948


> Misleading log statements in UnorderedStreamElementQueue
> 
>
> Key: FLINK-5376
> URL: https://issues.apache.org/jira/browse/FLINK-5376
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Reporter: Ted Yu
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 1.4.0
>
>
> The following are two examples where ordered stream element queue is 
> mentioned:
> {code}
> LOG.debug("Put element into ordered stream element queue. New filling 
> degree " +
>   "({}/{}).", numberEntries, capacity);
> return true;
>   } else {
> LOG.debug("Failed to put element into ordered stream element queue 
> because it " +
> {code}
> I guess OrderedStreamElementQueue was coded first.





[jira] [Closed] (FLINK-5376) Misleading log statements in UnorderedStreamElementQueue

2017-05-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-5376.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

1.4: ed65c253478e688fdfbb149a1a75dc01b8537cab

> Misleading log statements in UnorderedStreamElementQueue
> 
>
> Key: FLINK-5376
> URL: https://issues.apache.org/jira/browse/FLINK-5376
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Reporter: Ted Yu
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 1.4.0
>
>
> The following are two examples where ordered stream element queue is 
> mentioned:
> {code}
> LOG.debug("Put element into ordered stream element queue. New filling 
> degree " +
>   "({}/{}).", numberEntries, capacity);
> return true;
>   } else {
> LOG.debug("Failed to put element into ordered stream element queue 
> because it " +
> {code}
> I guess OrderedStreamElementQueue was coded first.





[jira] [Commented] (FLINK-6675) Activate strict checkstyle for flink-annotations

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023005#comment-16023005
 ] 

ASF GitHub Bot commented on FLINK-6675:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3970


> Activate strict checkstyle for flink-annotations
> 
>
> Key: FLINK-6675
> URL: https://issues.apache.org/jira/browse/FLINK-6675
> Project: Flink
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>






[jira] [Commented] (FLINK-6659) RocksDBMergeIteratorTest, SavepointITCase leave temporary directories behind

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023004#comment-16023004
 ] 

ASF GitHub Bot commented on FLINK-6659:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3962


> RocksDBMergeIteratorTest, SavepointITCase leave temporary directories behind
> 
>
> Key: FLINK-6659
> URL: https://issues.apache.org/jira/browse/FLINK-6659
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.2.0, 1.3.0, 1.2.1, 1.2.2
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>
> {{RocksDBMergeIteratorTest}} uses a newly created temporary directory (via 
> {{CommonTestUtils.createTempDirectory()}}) for its RocksDB instance but does 
> not delete it when finished. We should replace this pattern with a 
> proper {{@Rule}}-based approach.





[jira] [Commented] (FLINK-6431) Activate strict checkstyle for flink-metrics

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16023003#comment-16023003
 ] 

ASF GitHub Bot commented on FLINK-6431:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3968


> Activate strict checkstyle for flink-metrics
> 
>
> Key: FLINK-6431
> URL: https://issues.apache.org/jira/browse/FLINK-6431
> Project: Flink
>  Issue Type: Sub-task
>  Components: Metrics
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>






[GitHub] flink pull request #3969: [FLINK-6432] [py] Activate strict checkstyle for f...

2017-05-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3969


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3970: [FLINK-6675] Activate strict checkstyle for flink-...

2017-05-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3970




[GitHub] flink pull request #3966: [FLINK-6320] fix unit test failing sometimes when ...

2017-05-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3966




[GitHub] flink pull request #3968: [FLINK-6431] [metrics] Activate strict checkstyle ...

2017-05-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3968




[GitHub] flink pull request #3948: [FLINK-5376] Fix log statement in UnorderedStreamE...

2017-05-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3948




[GitHub] flink pull request #3962: [FLINK-6659] fix (some) unit tests leaving tempora...

2017-05-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3962




[jira] [Created] (FLINK-6705) Check visibility of methods in Table API classes

2017-05-24 Thread Timo Walther (JIRA)
Timo Walther created FLINK-6705:
---

 Summary: Check visibility of methods in Table API classes 
 Key: FLINK-6705
 URL: https://issues.apache.org/jira/browse/FLINK-6705
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther


If a user is working with the Java version of the Table API, there are many 
methods that should not be exposed. We should try to improve this.

E.g. {{TableEnvironment}} has visible methods like {{getFieldIndices}} or 
{{validateType}}.





[jira] [Updated] (FLINK-6689) Remote StreamExecutionEnvironment fails to submit jobs against LocalFlinkMiniCluster

2017-05-24 Thread Nico Kruber (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber updated FLINK-6689:
---
Description: 
The following Flink program fails to execute with the current 1.3 branch (1.2 
works) because the leader session ID being used is wrong:

{code:java}
final String jobManagerAddress = "localhost";
final int jobManagerPort = ConfigConstants.DEFAULT_JOB_MANAGER_IPC_PORT;

final Configuration config = new Configuration();
config.setString(ConfigConstants.JOB_MANAGER_IPC_ADDRESS_KEY, 
jobManagerAddress);
config.setInteger(ConfigConstants.JOB_MANAGER_IPC_PORT_KEY, 
jobManagerPort);
config.setBoolean(ConfigConstants.LOCAL_START_WEBSERVER, true);

final LocalFlinkMiniCluster cluster = new LocalFlinkMiniCluster(config, false);
cluster.start(true);

final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.createRemoteEnvironment(jobManagerAddress, 
jobManagerPort);

env.fromElements(1l).addSink(new DiscardingSink());

// fails due to leader session id being wrong:
env.execute("test");
{code}

Output from the logs contains:
{code}
...
16:24:57,551 INFO  org.apache.flink.runtime.webmonitor.JobManagerRetriever  
 - New leader reachable under 
akka.tcp://flink@localhost:6123/user/jobmanager:ff0d56cf-6205-4dd4-a266-03847f4d6944.
16:24:57,894 INFO  
org.apache.flink.streaming.api.environment.RemoteStreamEnvironment  - Running 
remotely at localhost:6123
16:24:58,121 INFO  org.apache.flink.client.program.StandaloneClusterClient  
 - Starting client actor system.
16:24:58,123 INFO  org.apache.flink.runtime.util.LeaderRetrievalUtils   
 - Trying to select the network interface and address to use by connecting to 
the leading JobManager.
16:24:58,128 INFO  org.apache.flink.runtime.util.LeaderRetrievalUtils   
 - TaskManager will try to connect for 1 milliseconds before falling back 
to heuristics
16:24:58,132 INFO  org.apache.flink.runtime.net.ConnectionUtils 
 - Retrieved new target address localhost/127.0.0.1:6123.
16:24:58,258 INFO  akka.event.slf4j.Slf4jLogger 
 - Slf4jLogger started
16:24:58,262 INFO  Remoting 
 - Starting remoting
16:24:58,375 INFO  Remoting 
 - Remoting started; listening on addresses 
:[akka.tcp://fl...@nico-work.fritz.box:43413]
16:24:58,376 INFO  org.apache.flink.client.program.StandaloneClusterClient  
 - Submitting job with JobID: 9bef4793a4b7f4caaad96bd28211cbb9. Waiting for job 
completion.
Submitting job with JobID: 9bef4793a4b7f4caaad96bd28211cbb9. Waiting for job 
completion.
16:24:58,382 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor 
 - Disconnect from JobManager null.
16:24:58,398 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor 
 - Received SubmitJobAndWait(JobGraph(jobId: 9bef4793a4b7f4caaad96bd28211cbb9)) 
but there is no connection to a JobManager yet.
16:24:58,398 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor 
 - Received job test (9bef4793a4b7f4caaad96bd28211cbb9).
16:24:58,429 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor 
 - Connect to JobManager 
Actor[akka.tcp://flink@localhost:6123/user/jobmanager#1881889998].
16:24:58,432 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor 
 - Connected to JobManager at 
Actor[akka.tcp://flink@localhost:6123/user/jobmanager#1881889998] with leader 
session id ----.
Connected to JobManager at 
Actor[akka.tcp://flink@localhost:6123/user/jobmanager#1881889998] with leader 
session id ----.
16:24:58,432 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor 
 - Sending message to JobManager 
akka.tcp://flink@localhost:6123/user/jobmanager to submit job test 
(9bef4793a4b7f4caaad96bd28211cbb9) and wait for progress
16:24:58,433 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor 
 - Upload jar files to job manager 
akka.tcp://flink@localhost:6123/user/jobmanager.
16:24:58,440 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor 
 - Submit job to the job manager 
akka.tcp://flink@localhost:6123/user/jobmanager.
16:24:58,522 WARN  org.apache.flink.runtime.jobmanager.JobManager   
 - Discard message 
LeaderSessionMessage(----,SubmitJob(JobGraph(jobId:
 9bef4793a4b7f4caaad96bd28211cbb9),EXECUTION_RESULT_AND_STATE_CHANGES)) because 
the expected leader session ID ff0d56cf-6205-4dd4-a266-03847f4d6944 did not 
equal the received leader session ID ----.

{code}

  was:
The following Flink program fails to execute with the current 1.3 branch (1.2 
works):

{code:java}
final String jobManagerAddress = "localhost";
final int jobManagerPort = 

[jira] [Commented] (FLINK-6569) flink-table KafkaJsonTableSource example doesn't work

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022956#comment-16022956
 ] 

ASF GitHub Bot commented on FLINK-6569:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3890
  
@haohui can you close this PR?


> flink-table KafkaJsonTableSource example doesn't work
> -
>
> Key: FLINK-6569
> URL: https://issues.apache.org/jira/browse/FLINK-6569
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Robert Metzger
>Assignee: Haohui Mai
> Fix For: 1.3.0
>
>
> The code example uses 
> {code}
> TypeInformation typeInfo = Types.ROW(
>   new String[] { "id", "name", "score" },
>   new TypeInformation[] { Types.INT(), Types.STRING(), Types.DOUBLE() }
> );
> {code}
> the correct way of using it is something like
> {code}
> TypeInformation typeInfo = Types.ROW_NAMED(
> new String[] { "id", "zip", "date" },
> Types.LONG, Types.INT, Types.SQL_DATE);
> {code}





[GitHub] flink issue #3890: [FLINK-6569] flink-table KafkaJsonTableSource example doe...

2017-05-24 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3890
  
@haohui can you close this PR?




[jira] [Commented] (FLINK-6137) Activate strict checkstyle for flink-cep

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022953#comment-16022953
 ] 

ASF GitHub Bot commented on FLINK-6137:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3976#discussion_r118265027
  
--- Diff: 
flink-libraries/flink-cep-scala/src/main/scala/org/apache/flink/cep/scala/pattern/Pattern.scala
 ---
@@ -271,7 +271,7 @@ class Pattern[T , F <: T](jPattern: JPattern[T, F]) {
 * {{{A1 A2 B}}} appears, this will generate patterns:
 * {{{A1 B}}} and {{{A1 A2 B}}}. See also {{{allowCombinations()}}}.
 *
-* @return The same pattern with a [[Quantifier.ONE_OR_MORE()]] 
quantifier applied.
+* @return The same pattern with a [[Quantifier.oneOrMore()]] 
quantifier applied.
--- End diff --

ok then.


> Activate strict checkstyle for flink-cep
> 
>
> Key: FLINK-6137
> URL: https://issues.apache.org/jira/browse/FLINK-6137
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Minor
>
> Add a custom checkstyle.xml for `flink-cep` library as in [FLINK-6107]





[jira] [Commented] (FLINK-6137) Activate strict checkstyle for flink-cep

2017-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022954#comment-16022954
 ] 

ASF GitHub Bot commented on FLINK-6137:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3976#discussion_r118265038
  
--- Diff: 
flink-libraries/flink-cep-scala/src/main/scala/org/apache/flink/cep/scala/pattern/Pattern.scala
 ---
@@ -271,7 +271,7 @@ class Pattern[T , F <: T](jPattern: JPattern[T, F]) {
 * {{{A1 A2 B}}} appears, this will generate patterns:
 * {{{A1 B}}} and {{{A1 A2 B}}}. See also {{{allowCombinations()}}}.
 *
-* @return The same pattern with a [[Quantifier.ONE_OR_MORE()]] 
quantifier applied.
+* @return The same pattern with a [[Quantifier.oneOrMore()]] 
quantifier applied.
--- End diff --

ok then. :)


> Activate strict checkstyle for flink-cep
> 
>
> Key: FLINK-6137
> URL: https://issues.apache.org/jira/browse/FLINK-6137
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Minor
>
> Add a custom checkstyle.xml for `flink-cep` library as in [FLINK-6107]




