[jira] [Commented] (FLINK-6975) Add CONCAT/CONCAT_WS supported in TableAPI

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086823#comment-16086823
 ] 

ASF GitHub Bot commented on FLINK-6975:
---

Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/4274#discussion_r127382075
  
--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/stringExpressions.scala ---
@@ -277,3 +278,47 @@ case class Overlay(
       position.toRexNode)
   }
 }
+
+/**
+  * Returns the string that results from concatenating the arguments.
+  * Returns NULL if any argument is NULL.
+  */
+case class Concat(strings: Seq[Expression]) extends Expression with InputTypeSpec {
+
+  override private[flink] def children: Seq[Expression] = strings
+
+  override private[flink] def resultType: TypeInformation[_] = BasicTypeInfo.STRING_TYPE_INFO
+
+  override private[flink] def expectedTypes: Seq[TypeInformation[_]] =
+    children.map(_ => STRING_TYPE_INFO)
+
+  override def toString: String = s"concat($strings)"
+
+  override private[flink] def toRexNode(implicit relBuilder: RelBuilder): RexNode = {
+    relBuilder.call(ScalarSqlFunctions.CONCAT, children.map(_.toRexNode))
+  }
+}
+
+/**
+  * Returns the string that results from concatenating the arguments and separator.
+  * Returns NULL if the separator is NULL.
+  *
+  * Note: this user-defined function does not skip empty strings. However, it does skip any NULL
+  * values after the separator argument.
+  */
+case class ConcatWs(separator: Expression, strings: Seq[Expression])
--- End diff --

What about making this signature `args: Seq[Expression]`, which combines 
`separator` and `strings` before constructing `ConcatWs`? That way we do not 
need to change the FunctionCatalog. I think it's fine, because `ConcatWs` is 
not used by users.
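
For illustration only, a minimal sketch of the suggested combined signature, assuming a `ScalarSqlFunctions.CONCAT_WS` operator analogous to `CONCAT` above (hypothetical, not necessarily the merged code):

```scala
case class ConcatWs(args: Seq[Expression]) extends Expression with InputTypeSpec {

  // args.head is the separator, args.tail are the strings to join
  override private[flink] def children: Seq[Expression] = args

  override private[flink] def resultType: TypeInformation[_] =
    BasicTypeInfo.STRING_TYPE_INFO

  override private[flink] def expectedTypes: Seq[TypeInformation[_]] =
    children.map(_ => BasicTypeInfo.STRING_TYPE_INFO)

  override def toString: String = s"concat_ws($args)"

  override private[flink] def toRexNode(implicit relBuilder: RelBuilder): RexNode =
    relBuilder.call(ScalarSqlFunctions.CONCAT_WS, children.map(_.toRexNode))
}
```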


> Add CONCAT/CONCAT_WS supported in TableAPI
> --
>
> Key: FLINK-6975
> URL: https://issues.apache.org/jira/browse/FLINK-6975
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>  Labels: starter
>
> See FLINK-6925 for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4274: [FLINK-6975][table]Add CONCAT/CONCAT_WS supported ...

2017-07-13 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/4274#discussion_r127382075
  
--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/stringExpressions.scala ---
@@ -277,3 +278,47 @@ case class Overlay(
       position.toRexNode)
   }
 }
+
+/**
+  * Returns the string that results from concatenating the arguments.
+  * Returns NULL if any argument is NULL.
+  */
+case class Concat(strings: Seq[Expression]) extends Expression with InputTypeSpec {
+
+  override private[flink] def children: Seq[Expression] = strings
+
+  override private[flink] def resultType: TypeInformation[_] = BasicTypeInfo.STRING_TYPE_INFO
+
+  override private[flink] def expectedTypes: Seq[TypeInformation[_]] =
+    children.map(_ => STRING_TYPE_INFO)
+
+  override def toString: String = s"concat($strings)"
+
+  override private[flink] def toRexNode(implicit relBuilder: RelBuilder): RexNode = {
+    relBuilder.call(ScalarSqlFunctions.CONCAT, children.map(_.toRexNode))
+  }
+}
+
+/**
+  * Returns the string that results from concatenating the arguments and separator.
+  * Returns NULL if the separator is NULL.
+  *
+  * Note: this user-defined function does not skip empty strings. However, it does skip any NULL
+  * values after the separator argument.
+  */
+case class ConcatWs(separator: Expression, strings: Seq[Expression])
--- End diff --

What about making this signature `args: Seq[Expression]`, which combines 
`separator` and `strings` before constructing `ConcatWs`? That way we do not 
need to change the FunctionCatalog. I think it's fine, because `ConcatWs` is 
not used by users.
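
As a usage note, with the flat-argument form the catalog-side construction could simply prepend the separator (a sketch, under the same hypothetical signature as above):

```scala
// hypothetical: separator first, then the strings to join
val concatWs = ConcatWs(separator +: strings)
```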


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7173) Fix the illustration of tumbling window.

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086710#comment-16086710
 ] 

ASF GitHub Bot commented on FLINK-7173:
---

Github user sunjincheng121 closed the pull request at:

https://github.com/apache/flink/pull/4322


> Fix the illustration of tumbling window.
> 
>
> Key: FLINK-7173
> URL: https://issues.apache.org/jira/browse/FLINK-7173
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Documentation
>Reporter: sunjincheng
>Assignee: sunjincheng
> Fix For: 1.4.0
>
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> !screenshot-1.png!
> Change it to :
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4322: [FLINK-7173][doc]Change the illustration of tumbling wind...

2017-07-13 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4322
  
Sure! :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4322: [FLINK-7173][doc]Change the illustration of tumbli...

2017-07-13 Thread sunjincheng121
Github user sunjincheng121 closed the pull request at:

https://github.com/apache/flink/pull/4322


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7173) Fix the illustration of tumbling window.

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086711#comment-16086711
 ] 

ASF GitHub Bot commented on FLINK-7173:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4322
  
Sure! :)


> Fix the illustration of tumbling window.
> 
>
> Key: FLINK-7173
> URL: https://issues.apache.org/jira/browse/FLINK-7173
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Documentation
>Reporter: sunjincheng
>Assignee: sunjincheng
> Fix For: 1.4.0
>
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> !screenshot-1.png!
> Change it to :
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6232) Support proctime inner equi-join between two streams in the SQL API

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086684#comment-16086684
 ] 

ASF GitHub Bot commented on FLINK-6232:
---

Github user hongyuhong closed the pull request at:

https://github.com/apache/flink/pull/3715


> Support proctime inner equi-join between two streams in the SQL API
> ---
>
> Key: FLINK-6232
> URL: https://issues.apache.org/jira/browse/FLINK-6232
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: hongyuhong
>Assignee: hongyuhong
>
> The goal of this issue is to add support for inner equi-join on proc time 
> streams to the SQL interface.
> Queries similar to the following should be supported:
> {code}
> SELECT o.proctime, o.productId, o.orderId, s.proctime AS shipTime 
> FROM Orders AS o 
> JOIN Shipments AS s 
> ON o.orderId = s.orderId 
> AND o.proctime BETWEEN s.proctime AND s.proctime + INTERVAL '1' HOUR;
> {code}
> The following restrictions should initially apply:
> * The join hint only supports inner joins
> * The ON clause must include an equi-join condition
> * The time condition {{o.proctime BETWEEN s.proctime AND s.proctime + 
> INTERVAL '1' HOUR}} may only use {{proctime}}, which is a system attribute. 
> The time condition must be a bounded time range like {{o.proctime BETWEEN 
> s.proctime - INTERVAL '1' HOUR AND s.proctime + INTERVAL '1' HOUR}}; 
> unbounded conditions like {{o.proctime > s.proctime}} are not supported. The 
> condition must also reference both streams' proctime attributes, so 
> {{o.proctime between proctime() and proctime() + 1}} is not supported either.
> This issue includes:
> * Design of the DataStream operator to deal with stream join
> * Translation from Calcite's RelNode representation (LogicalJoin). 
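
A minimal sketch of the bounded time-range check described by the restrictions above (hypothetical helper, not the operator code from the pull requests):

{code}
// true if the left element's proctime falls within
// [rightProctime - lowerBoundMs, rightProctime + upperBoundMs]
def withinJoinWindow(
    leftProctime: Long,
    rightProctime: Long,
    lowerBoundMs: Long,
    upperBoundMs: Long): Boolean =
  leftProctime >= rightProctime - lowerBoundMs &&
    leftProctime <= rightProctime + upperBoundMs

// the bound from the example query
// (o.proctime BETWEEN s.proctime AND s.proctime + INTERVAL '1' HOUR):
// withinJoinWindow(oProctime, sProctime, 0L, 60 * 60 * 1000L)
{code}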



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6232) Support proctime inner equi-join between two streams in the SQL API

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086685#comment-16086685
 ] 

ASF GitHub Bot commented on FLINK-6232:
---

Github user hongyuhong closed the pull request at:

https://github.com/apache/flink/pull/4266


> Support proctime inner equi-join between two streams in the SQL API
> ---
>
> Key: FLINK-6232
> URL: https://issues.apache.org/jira/browse/FLINK-6232
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: hongyuhong
>Assignee: hongyuhong
>
> The goal of this issue is to add support for inner equi-join on proc time 
> streams to the SQL interface.
> Queries similar to the following should be supported:
> {code}
> SELECT o.proctime, o.productId, o.orderId, s.proctime AS shipTime 
> FROM Orders AS o 
> JOIN Shipments AS s 
> ON o.orderId = s.orderId 
> AND o.proctime BETWEEN s.proctime AND s.proctime + INTERVAL '1' HOUR;
> {code}
> The following restrictions should initially apply:
> * The join hint only supports inner joins
> * The ON clause must include an equi-join condition
> * The time condition {{o.proctime BETWEEN s.proctime AND s.proctime + 
> INTERVAL '1' HOUR}} may only use {{proctime}}, which is a system attribute. 
> The time condition must be a bounded time range like {{o.proctime BETWEEN 
> s.proctime - INTERVAL '1' HOUR AND s.proctime + INTERVAL '1' HOUR}}; 
> unbounded conditions like {{o.proctime > s.proctime}} are not supported. The 
> condition must also reference both streams' proctime attributes, so 
> {{o.proctime between proctime() and proctime() + 1}} is not supported either.
> This issue includes:
> * Design of the DataStream operator to deal with stream join
> * Translation from Calcite's RelNode representation (LogicalJoin). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4266: [FLINK-6232][Table] support proctime inner win...

2017-07-13 Thread hongyuhong
Github user hongyuhong closed the pull request at:

https://github.com/apache/flink/pull/4266


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3715: [FLINK-6232][Table]Support proctime inner equi...

2017-07-13 Thread hongyuhong
Github user hongyuhong closed the pull request at:

https://github.com/apache/flink/pull/3715


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6493) Ineffective null check in RegisteredOperatorBackendStateMetaInfo#equals()

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086659#comment-16086659
 ] 

ASF GitHub Bot commented on FLINK-6493:
---

GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4328

[FLINK-6493] Fix ineffective null check in RegisteredOperatorBackendS…

This PR is similar to https://github.com/apache/flink/pull/1871/files. 
@tedyu What do you think of this change?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink 
flink-6493-Ineffective-null-check-in-RegisteredOperatorBackendStateMetaInfo#equals

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4328.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4328


commit 5e771d6a877867cf450fd7ce40c069db6ce97482
Author: zhangminglei 
Date:   2017-07-14T00:48:51Z

[FLINK-6493] Fix ineffective null check in 
RegisteredOperatorBackendStateMetaInfo#equals




> Ineffective null check in RegisteredOperatorBackendStateMetaInfo#equals()
> -
>
> Key: FLINK-6493
> URL: https://issues.apache.org/jira/browse/FLINK-6493
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> {code}
> && ((partitionStateSerializer == null && ((Snapshot) 
> obj).getPartitionStateSerializer() == null)
>   || partitionStateSerializer.equals(((Snapshot) 
> obj).getPartitionStateSerializer()))
> && ((partitionStateSerializerConfigSnapshot == null && ((Snapshot) 
> obj).getPartitionStateSerializerConfigSnapshot() == null)
>   || partitionStateSerializerConfigSnapshot.equals(((Snapshot) 
> obj).getPartitionStateSerializerConfigSnapshot()));
> {code}
> The null check for partitionStateSerializer / 
> partitionStateSerializerConfigSnapshot is in combination with another clause.
> This may lead to NPE in the partitionStateSerializer.equals() call.
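
For illustration, the null-safe comparison the fix needs can be sketched as follows (the actual class is Java; this Scala sketch mirrors the logic only):

{code}
// guard the equals() call with the null check; the quoted clause instead
// OR-s the all-null case with an unguarded equals(), so a null
// partitionStateSerializer still reaches .equals() and throws an NPE
def nullSafeEquals(a: AnyRef, b: AnyRef): Boolean =
  if (a == null) b == null else a.equals(b)
{code}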



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4328: [FLINK-6493] Fix ineffective null check in Registe...

2017-07-13 Thread zhangminglei
GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4328

[FLINK-6493] Fix ineffective null check in RegisteredOperatorBackendS…

This PR is similar to https://github.com/apache/flink/pull/1871/files. 
@tedyu What do you think of this change?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink 
flink-6493-Ineffective-null-check-in-RegisteredOperatorBackendStateMetaInfo#equals

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4328.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4328


commit 5e771d6a877867cf450fd7ce40c069db6ce97482
Author: zhangminglei 
Date:   2017-07-14T00:48:51Z

[FLINK-6493] Fix ineffective null check in 
RegisteredOperatorBackendStateMetaInfo#equals




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6998) Kafka connector needs to expose metrics for failed/successful offset commits in the Kafka Consumer callback

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086587#comment-16086587
 ] 

ASF GitHub Bot commented on FLINK-6998:
---

Github user zhenzhongxu commented on the issue:

https://github.com/apache/flink/pull/4187
  
@tzulitai it seems the last CI pipeline failed because of stability issues; 
how can I trigger another build without making a commit?


> Kafka connector needs to expose metrics for failed/successful offset commits 
> in the Kafka Consumer callback
> ---
>
> Key: FLINK-6998
> URL: https://issues.apache.org/jira/browse/FLINK-6998
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Reporter: Zhenzhong Xu
>Assignee: Zhenzhong Xu
>
> Propose to add "kafkaCommitsSucceeded" and "kafkaCommitsFailed" metrics in 
> KafkaConsumerThread class.
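
For illustration, registering such counters against Flink's metric group could look like this sketch (metric names from the proposal; the surrounding class and wiring are hypothetical):

{code}
import org.apache.flink.metrics.{Counter, MetricGroup}

class CommitCallbackMetrics(metricGroup: MetricGroup) {
  // counters as proposed above
  val commitsSucceeded: Counter = metricGroup.counter("kafkaCommitsSucceeded")
  val commitsFailed: Counter = metricGroup.counter("kafkaCommitsFailed")

  // to be invoked from the Kafka offset-commit callback
  def onCommitSuccess(): Unit = commitsSucceeded.inc()
  def onCommitFailure(): Unit = commitsFailed.inc()
}
{code}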



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4187: [FLINK-6998][Kafka Connector] Add kafka offset commit met...

2017-07-13 Thread zhenzhongxu
Github user zhenzhongxu commented on the issue:

https://github.com/apache/flink/pull/4187
  
@tzulitai it seems the last CI pipeline failed because of stability issues; 
how can I trigger another build without making a commit?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Closed] (FLINK-6693) Support DATE_FORMAT function in the Table / SQL API

2017-07-13 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-6693.

   Resolution: Implemented
Fix Version/s: 1.4.0

Implemented for 1.4.0 with 3dbbeedb961f19b5ac4fe2d1d28ebb77af986d31

> Support DATE_FORMAT function in the Table / SQL API
> ---
>
> Key: FLINK-6693
> URL: https://issues.apache.org/jira/browse/FLINK-6693
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 1.4.0
>
>
> It would be quite handy to support the {{DATE_FORMAT}} function in Flink to 
> support various date / time related operations:
> The specification of the {{DATE_FORMAT}} function can be found in 
> https://prestodb.io/docs/current/functions/datetime.html.
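
Once implemented, a query using it might look like this sketch (assuming a registered {{Orders}} table with a {{ts}} column, a {{tEnv}} TableEnvironment, and Presto-style format specifiers):

{code}
val result = tEnv.sql(
  "SELECT DATE_FORMAT(ts, '%Y-%m-%d') AS day FROM Orders")
{code}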



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-7154) Missing call to build CsvTableSource example

2017-07-13 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-7154.

   Resolution: Fixed
Fix Version/s: 1.3.2
   1.4.0

Fixed for 1.3.2 with 1ed2ef4389d1814b5353df6675295a93b54cc5c7
Fixed for 1.4.0 with 94d3166b474ca4b4270ee87725ee0f9a08c8bd56

> Missing call to build CsvTableSource example
> 
>
> Key: FLINK-7154
> URL: https://issues.apache.org/jira/browse/FLINK-7154
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Trivial
> Fix For: 1.4.0, 1.3.2
>
>
> The Java and Scala example code for CsvTableSource create a builder but are 
> missing the final call to {{build}}.
> https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/table/sourceSinks.html#csvtablesource
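
For reference, the corrected example ends with the missing call; a sketch based on the builder shown in the linked docs (field names are illustrative):

{code}
val csvTableSource = CsvTableSource
  .builder()
  .path("/path/to/your/file.csv")
  .field("name", Types.STRING)
  .field("id", Types.INT)
  .build()  // the call the examples were missing
{code}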



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6693) Support DATE_FORMAT function in the Table / SQL API

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086455#comment-16086455
 ] 

ASF GitHub Bot commented on FLINK-6693:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4078


> Support DATE_FORMAT function in the Table / SQL API
> ---
>
> Key: FLINK-6693
> URL: https://issues.apache.org/jira/browse/FLINK-6693
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> It would be quite handy to support the {{DATE_FORMAT}} function in Flink to 
> support various date / time related operations:
> The specification of the {{DATE_FORMAT}} function can be found in 
> https://prestodb.io/docs/current/functions/datetime.html.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7154) Missing call to build CsvTableSource example

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086456#comment-16086456
 ] 

ASF GitHub Bot commented on FLINK-7154:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4313


> Missing call to build CsvTableSource example
> 
>
> Key: FLINK-7154
> URL: https://issues.apache.org/jira/browse/FLINK-7154
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Trivial
>
> The Java and Scala example code for CsvTableSource create a builder but are 
> missing the final call to {{build}}.
> https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/table/sourceSinks.html#csvtablesource



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4303: fix doc typo in DataStreamRel

2017-07-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4303


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4313: [FLINK-7154] [docs] Missing call to build CsvTable...

2017-07-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4313


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4078: [FLINK-6693] [table] Support DATE_FORMAT function ...

2017-07-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4078


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6951) Incompatible versions of httpcomponents jars for Flink kinesis connector

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086341#comment-16086341
 ] 

ASF GitHub Bot commented on FLINK-6951:
---

Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/4150
  
During testing, I saw the following exception:

```java
java.lang.IllegalStateException: Socket not created by this factory
    at org.apache.flink.kinesis.shaded.org.apache.http.util.Asserts.check(Asserts.java:34)
    at org.apache.flink.kinesis.shaded.org.apache.http.conn.ssl.SSLSocketFactory.isSecure(SSLSocketFactory.java:435)
    at org.apache.flink.kinesis.shaded.org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:186)
    at org.apache.flink.kinesis.shaded.org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:326)
    at org.apache.flink.kinesis.shaded.org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:610)
    at org.apache.flink.kinesis.shaded.org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:445)
    at org.apache.flink.kinesis.shaded.org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:835)
    at org.apache.flink.kinesis.shaded.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
    at org.apache.flink.kinesis.shaded.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
    at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:837)
    at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)
    at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)
    at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)
    at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)
    at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:1940)
    at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:1910)
    at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.describeStream(AmazonKinesisClient.java:656)
    at org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxy.describeStream(KinesisProxy.java:363)
    at org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxy.getShardsOfStream(KinesisProxy.java:325)
    at org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxy.getShardList(KinesisProxy.java:233)
    at org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher.discoverNewShardsToSubscribe(KinesisDataFetcher.java:430)
    at org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer.run(FlinkKinesisConsumer.java:203)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:55)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:95)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:262)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
    at java.lang.Thread.run(Thread.java:748)
```

According to 
https://github.com/Jean-Emile/org.apache.httpclient/blob/master/src/main/java/org/apache/http/conn/ssl/SSLSocketFactory.java,
 it seems a different kind of socket is passed around when shading 
httpcomponents. I'll see if I have time to dig deeper. Any insight is highly 
appreciated.


> Incompatible versions of httpcomponents jars for Flink kinesis connector
> 
>
> Key: FLINK-6951
> URL: https://issues.apache.org/jira/browse/FLINK-6951
> Project: Flink
>  Issue Type: Bug
>  Components: Kinesis Connector
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Bowen Li
>Priority: Blocker
> Fix For: 1.4.0, 1.3.2
>
>
> In the following thread, Bowen reported incompatible versions of 
> httpcomponents jars for Flink kinesis connector :
> http://search-hadoop.com/m/Flink/VkLeQN2m5EySpb1?subj=Re+Incompatible+Apache+Http+lib+in+Flink+kinesis+connector
> We should find a solution such that users don't have to change dependency 
> version(s) themselves when building Flink kinesis connector.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

[GitHub] flink issue #4150: [FLINK-6951] Incompatible versions of httpcomponents jars...

2017-07-13 Thread bowenli86
Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/4150
  
During testing, I saw the following exception:

```java
java.lang.IllegalStateException: Socket not created by this factory
    at org.apache.flink.kinesis.shaded.org.apache.http.util.Asserts.check(Asserts.java:34)
    at org.apache.flink.kinesis.shaded.org.apache.http.conn.ssl.SSLSocketFactory.isSecure(SSLSocketFactory.java:435)
    at org.apache.flink.kinesis.shaded.org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:186)
    at org.apache.flink.kinesis.shaded.org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:326)
    at org.apache.flink.kinesis.shaded.org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:610)
    at org.apache.flink.kinesis.shaded.org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:445)
    at org.apache.flink.kinesis.shaded.org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:835)
    at org.apache.flink.kinesis.shaded.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
    at org.apache.flink.kinesis.shaded.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
    at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:837)
    at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)
    at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)
    at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)
    at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)
    at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:1940)
    at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:1910)
    at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.describeStream(AmazonKinesisClient.java:656)
    at org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxy.describeStream(KinesisProxy.java:363)
    at org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxy.getShardsOfStream(KinesisProxy.java:325)
    at org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxy.getShardList(KinesisProxy.java:233)
    at org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher.discoverNewShardsToSubscribe(KinesisDataFetcher.java:430)
    at org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer.run(FlinkKinesisConsumer.java:203)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:55)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:95)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:262)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
    at java.lang.Thread.run(Thread.java:748)
```

According to 
https://github.com/Jean-Emile/org.apache.httpclient/blob/master/src/main/java/org/apache/http/conn/ssl/SSLSocketFactory.java,
 it seems a different kind of socket is passed around when shading 
httpcomponents. I'll see if I have time to dig deeper. Any insight is highly 
appreciated.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7178) Datadog Metric Reporter Jar is Lacking Dependencies

2017-07-13 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086330#comment-16086330
 ] 

Chesnay Schepler commented on FLINK-7178:
-

The maven-shade plugin was not configured correctly and creates a separate 
shaded jar.

This shaded jar is also deployed (along with an un-shaded jar); as a workaround 
for 1.3.1 you can add the {{shaded}} classifier to the dependency.
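
For example, the workaround dependency could look like this (a sketch, assuming the attached artifact uses the standard {{shaded}} classifier name):

{code:xml}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-metrics-datadog</artifactId>
  <version>1.3.1</version>
  <classifier>shaded</classifier>
</dependency>
{code}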

> Datadog Metric Reporter Jar is Lacking Dependencies
> ---
>
> Key: FLINK-7178
> URL: https://issues.apache.org/jira/browse/FLINK-7178
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.3.1, 1.4.0
>Reporter: Elias Levy
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.3.2
>
>
> The Datadog metric reporter has dependencies on {{com.squareup.okhttp3}} and 
> {{com.squareup.okio}}.  It appears there was an attempt to use the Maven Shade 
> plug-in to move these classes to {{org.apache.flink.shaded.okhttp3}} and 
> {{org.apache.flink.shaded.okio}} during packaging.  Alas, the shaded classes 
> are not included in the {{flink-metrics-datadog-1.3.1.jar}} released to Maven 
> Central.  Using the Jar results in an error when the Jobmanager or 
> Taskmanager starts up because of the missing dependencies. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4327: [FLINK-7178] [metrics] Do not create separate shad...

2017-07-13 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4327

[FLINK-7178] [metrics] Do not create separate shaded jars

Backport of #4326 for 1.3.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7178_13

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4327.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4327


commit 5a5cea743c370382f8580aa33b0b62c62c8c62f8
Author: zentol 
Date:   2017-07-13T20:06:56Z

[FLINK-7178] [metrics] Do not create separate shaded jars




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7178) Datadog Metric Reporter Jar is Lacking Dependencies

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086325#comment-16086325
 ] 

ASF GitHub Bot commented on FLINK-7178:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4327

[FLINK-7178] [metrics] Do not create separate shaded jars

Backport of #4326 for 1.3.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7178_13

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4327.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4327


commit 5a5cea743c370382f8580aa33b0b62c62c8c62f8
Author: zentol 
Date:   2017-07-13T20:06:56Z

[FLINK-7178] [metrics] Do not create separate shaded jars




> Datadog Metric Reporter Jar is Lacking Dependencies
> ---
>
> Key: FLINK-7178
> URL: https://issues.apache.org/jira/browse/FLINK-7178
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.3.1, 1.4.0
>Reporter: Elias Levy
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.3.2
>
>
> The Datadog metric reporter has dependencies on {{com.squareup.okhttp3}} and 
> {{com.squareup.okio}}.  It appears there was an attempt to use the Maven Shade 
> plug-in to move these classes to {{org.apache.flink.shaded.okhttp3}} and 
> {{org.apache.flink.shaded.okio}} during packaging.  Alas, the shaded classes 
> are not included in the {{flink-metrics-datadog-1.3.1.jar}} released to Maven 
> Central.  Using the Jar results in an error when the Jobmanager or 
> Taskmanager starts up because of the missing dependencies. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7178) Datadog Metric Reporter Jar is Lacking Dependencies

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086322#comment-16086322
 ] 

ASF GitHub Bot commented on FLINK-7178:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4326

[FLINK-7178] [metrics] Do not create separate shaded jars



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7178

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4326.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4326


commit 4b88f85d0ab0bdd65fc49fa7c27ba61a3309d2b4
Author: zentol 
Date:   2017-07-13T20:06:56Z

[FLINK-7178] [metrics] Do not create separate shaded jars




> Datadog Metric Reporter Jar is Lacking Dependencies
> ---
>
> Key: FLINK-7178
> URL: https://issues.apache.org/jira/browse/FLINK-7178
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.3.1, 1.4.0
>Reporter: Elias Levy
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.3.2
>
>
> The Datadog metric reporter has dependencies on {{com.squareup.okhttp3}} and 
> {{com.squareup.okio}}.  It appears there was an attempt to use the Maven Shade 
> plug-in to move these classes to {{org.apache.flink.shaded.okhttp3}} and 
> {{org.apache.flink.shaded.okio}} during packaging.  Alas, the shaded classes 
> are not included in the {{flink-metrics-datadog-1.3.1.jar}} released to Maven 
> Central.  Using the Jar results in an error when the Jobmanager or 
> Taskmanager starts up because of the missing dependencies. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7178) Datadog Metric Reporter Jar is Lacking Dependencies

2017-07-13 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086321#comment-16086321
 ] 

Chesnay Schepler commented on FLINK-7178:
-

[~aljoscha] We got another blocker here.

> Datadog Metric Reporter Jar is Lacking Dependencies
> ---
>
> Key: FLINK-7178
> URL: https://issues.apache.org/jira/browse/FLINK-7178
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.3.1, 1.4.0
>Reporter: Elias Levy
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.3.2
>
>
> The Datadog metric reporter has dependencies on {{com.squareup.okhttp3}} and 
> {{com.squareup.okio}}.  It appears there was an attempt to use the Maven Shade 
> plug-in to move these classes to {{org.apache.flink.shaded.okhttp3}} and 
> {{org.apache.flink.shaded.okio}} during packaging.  Alas, the shaded classes 
> are not included in the {{flink-metrics-datadog-1.3.1.jar}} released to Maven 
> Central.  Using the Jar results in an error when the Jobmanager or 
> Taskmanager starts up because of the missing dependencies. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4326: [FLINK-7178] [metrics] Do not create separate shad...

2017-07-13 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4326

[FLINK-7178] [metrics] Do not create separate shaded jars



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7178

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4326.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4326


commit 4b88f85d0ab0bdd65fc49fa7c27ba61a3309d2b4
Author: zentol 
Date:   2017-07-13T20:06:56Z

[FLINK-7178] [metrics] Do not create separate shaded jars




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (FLINK-7178) Datadog Metric Reporter Jar is Lacking Dependencies

2017-07-13 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-7178:

Priority: Blocker  (was: Major)

> Datadog Metric Reporter Jar is Lacking Dependencies
> ---
>
> Key: FLINK-7178
> URL: https://issues.apache.org/jira/browse/FLINK-7178
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.3.1, 1.4.0
>Reporter: Elias Levy
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.3.2
>
>
> The Datadog metric reporter has dependencies on {{com.squareup.okhttp3}} and 
> {{com.squareup.okio}}.  It appears there was an attempt to use the Maven Shade 
> plug-in to move these classes to {{org.apache.flink.shaded.okhttp3}} and 
> {{org.apache.flink.shaded.okio}} during packaging.  Alas, the shaded classes 
> are not included in the {{flink-metrics-datadog-1.3.1.jar}} released to Maven 
> Central.  Using the Jar results in an error when the Jobmanager or 
> Taskmanager starts up because of the missing dependencies. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7178) Datadog Metric Reporter Jar is Lacking Dependencies

2017-07-13 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-7178:

Affects Version/s: 1.4.0

> Datadog Metric Reporter Jar is Lacking Dependencies
> ---
>
> Key: FLINK-7178
> URL: https://issues.apache.org/jira/browse/FLINK-7178
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.3.1, 1.4.0
>Reporter: Elias Levy
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.3.2
>
>
> The Datadog metric reporter has dependencies on {{com.squareup.okhttp3}} and 
> {{com.squareup.okio}}.  It appears there was an attempt to use the Maven Shade 
> plug-in to move these classes to {{org.apache.flink.shaded.okhttp3}} and 
> {{org.apache.flink.shaded.okio}} during packaging.  Alas, the shaded classes 
> are not included in the {{flink-metrics-datadog-1.3.1.jar}} released to Maven 
> Central.  Using the Jar results in an error when the Jobmanager or 
> Taskmanager starts up because of the missing dependencies. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7178) Datadog Metric Reporter Jar is Lacking Dependencies

2017-07-13 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-7178:

Fix Version/s: 1.3.2
   1.4.0

> Datadog Metric Reporter Jar is Lacking Dependencies
> ---
>
> Key: FLINK-7178
> URL: https://issues.apache.org/jira/browse/FLINK-7178
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.3.1, 1.4.0
>Reporter: Elias Levy
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.3.2
>
>
> The Datadog metric reporter has dependencies on {{com.squareup.okhttp3}} and 
> {{com.squareup.okio}}.  It appears there was an attempt to use the Maven Shade 
> plug-in to move these classes to {{org.apache.flink.shaded.okhttp3}} and 
> {{org.apache.flink.shaded.okio}} during packaging.  Alas, the shaded classes 
> are not included in the {{flink-metrics-datadog-1.3.1.jar}} released to Maven 
> Central.  Using the Jar results in an error when the Jobmanager or 
> Taskmanager starts up because of the missing dependencies. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-7178) Datadog Metric Reporter Jar is Lacking Dependencies

2017-07-13 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-7178:
---

Assignee: Chesnay Schepler

> Datadog Metric Reporter Jar is Lacking Dependencies
> ---
>
> Key: FLINK-7178
> URL: https://issues.apache.org/jira/browse/FLINK-7178
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.3.1
>Reporter: Elias Levy
>Assignee: Chesnay Schepler
>
> The Datadog metric reporter has dependencies on {{com.squareup.okhttp3}} and 
> {{com.squareup.okio}}.  It appears there was an attempt to use the Maven Shade 
> plug-in to move these classes to {{org.apache.flink.shaded.okhttp3}} and 
> {{org.apache.flink.shaded.okio}} during packaging.  Alas, the shaded classes 
> are not included in the {{flink-metrics-datadog-1.3.1.jar}} released to Maven 
> Central.  Using the Jar results in an error when the Jobmanager or 
> Taskmanager starts up because of the missing dependencies. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7179) Projectable ProjectableTableSource interface doesn't compatible with BoundedOutOfOrdernessTimestampExtractor

2017-07-13 Thread Zhenqiu Huang (JIRA)
Zhenqiu Huang created FLINK-7179:


 Summary: Projectable ProjectableTableSource interface doesn't 
compatible with BoundedOutOfOrdernessTimestampExtractor
 Key: FLINK-7179
 URL: https://issues.apache.org/jira/browse/FLINK-7179
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.3.1
Reporter: Zhenqiu Huang


In the windowing implementation for streaming SQL, 
BoundedOutOfOrdernessTimestampExtractor is designed to extract the row time 
from each row, and it assumes the timestamp ({{ts}}) field is present in the 
data stream. ProjectableTableSource, on the other hand, is designed to support 
projection push-down. If a query does not reference the row-time field, the 
projection can remove it and the extractor cannot function.
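
A sketch of the conflict (extractor class name and field index are hypothetical): the extractor reads a timestamp field that projection push-down may have removed from the produced rows.

{code}
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.types.Row

class RowTimeExtractor(maxOutOfOrderness: Time)
  extends BoundedOutOfOrdernessTimestampExtractor[Row](maxOutOfOrderness) {

  // assumes the ts field survived projection; breaks once push-down drops it
  override def extractTimestamp(row: Row): Long =
    row.getField(2).asInstanceOf[Long]
}
{code}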









--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7179) ProjectableTableSource interface doesn't compatible with BoundedOutOfOrdernessTimestampExtractor

2017-07-13 Thread Zhenqiu Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenqiu Huang updated FLINK-7179:
-
Summary: ProjectableTableSource interface doesn't compatible with 
BoundedOutOfOrdernessTimestampExtractor  (was: Projectable 
ProjectableTableSource interface doesn't compatible with 
BoundedOutOfOrdernessTimestampExtractor)

> ProjectableTableSource interface doesn't compatible with 
> BoundedOutOfOrdernessTimestampExtractor
> 
>
> Key: FLINK-7179
> URL: https://issues.apache.org/jira/browse/FLINK-7179
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.1
>Reporter: Zhenqiu Huang
>
> In the windowing implementation for streaming SQL, 
> BoundedOutOfOrdernessTimestampExtractor is designed to extract the row time 
> from each row, and it assumes the timestamp ({{ts}}) field is present in the 
> data stream. ProjectableTableSource, on the other hand, is designed to 
> support projection push-down. If a query does not reference the row-time 
> field, the projection can remove it and the extractor cannot function. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7178) Datadog Metric Reporter Jar is Lacking Dependencies

2017-07-13 Thread Elias Levy (JIRA)
Elias Levy created FLINK-7178:
-

 Summary: Datadog Metric Reporter Jar is Lacking Dependencies
 Key: FLINK-7178
 URL: https://issues.apache.org/jira/browse/FLINK-7178
 Project: Flink
  Issue Type: Bug
  Components: Metrics
Affects Versions: 1.3.1
Reporter: Elias Levy


The Datadog metric reporter has dependencies on {{com.squareup.okhttp3}} and 
{{com.squareup.okio}}.  It appears there was an attempt to use the Maven Shade plug-in 
to move these classes to {{org.apache.flink.shaded.okhttp3}} and 
{{org.apache.flink.shaded.okio}} during packaging.  Alas, the shaded classes 
are not included in the {{flink-metrics-datadog-1.3.1.jar}} released to Maven 
Central.  Using the Jar results in an error when the Jobmanager or Taskmanager 
starts up because of the missing dependencies. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7177) [table] Using Table API to perform aggregation on another Table API / SQL result table causes runVolcanoPlanner failed on physicalPlan generation

2017-07-13 Thread Rong Rong (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong updated FLINK-7177:
-
Component/s: Table API & SQL

> [table] Using Table API to perform aggregation on another Table API / SQL 
> result table causes runVolcanoPlanner failed on physicalPlan generation
> -
>
> Key: FLINK-7177
> URL: https://issues.apache.org/jira/browse/FLINK-7177
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.1
>Reporter: Rong Rong
>
> For example:
> {code:title=flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/AggregationsITCase.scala|borderStyle=solid}
>   @Test
>   def testTableAggregationWithMultipleTableAPI(): Unit = {
>     val env = ExecutionEnvironment.getExecutionEnvironment
>     val tEnv = TableEnvironment.getTableEnvironment(env, config)
>     val inputTable = CollectionDataSets.getSmallNestedTupleDataSet(env).toTable(tEnv, 'a, 'b)
>     tEnv.registerDataSet("MyTable", inputTable)
>     val resultTable = tEnv.scan("MyTable").select('a, 'b).where('a.get("_1") > 0)
>     val failingTable = resultTable.select('a.get("_1").avg, 'a.get("_2").sum, 'b.count)
>     val expected = "2,6,3"
>     val results = failingTable.toDataSet[Row].collect()
>     TestBaseUtils.compareResultAsText(results.asJava, expected)
>   }
> {code}
> Details can be found in: 
> https://github.com/apache/flink/compare/master...walterddr:bug_report_sql_query_result_consume_by_table_api



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7177) [table] Using Table API to perform aggregation on another Table API / SQL result table causes runVolcanoPlanner failed on physicalPlan generation

2017-07-13 Thread Rong Rong (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong updated FLINK-7177:
-
Description: 
For example:
{code:title=flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/AggregationsITCase.scala|borderStyle=solid}

  @Test
  def testTableAggregationWithMultipleTableAPI(): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val tEnv = TableEnvironment.getTableEnvironment(env, config)

    val inputTable = CollectionDataSets.getSmallNestedTupleDataSet(env).toTable(tEnv, 'a, 'b)
    tEnv.registerDataSet("MyTable", inputTable)

    val resultTable = tEnv.scan("MyTable").select('a, 'b).where('a.get("_1") > 0)
    val failingTable = resultTable.select('a.get("_1").avg, 'a.get("_2").sum, 'b.count)

    val expected = "2,6,3"
    val results = failingTable.toDataSet[Row].collect()
    TestBaseUtils.compareResultAsText(results.asJava, expected)
  }
{code}

Details can be found in: 
https://github.com/apache/flink/compare/master...walterddr:bug_report_sql_query_result_consume_by_table_api

  was:
For example:
{code:title=flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/AggregationsITCase.scala|borderStyle=solid}
val env = ExecutionEnvironment.getExecutionEnvironment
val tEnv = TableEnvironment.getTableEnvironment(env, config)
val inputTable = CollectionDataSets.getSmallNestedTupleDataSet(env).toTable(tEnv, 'a, 'b)
val resultTable = inputTable.select('a, 'b).where('a.get("_1") > 0)
val failingTable = resultTable.select('a.get("_1").avg, 'a.get("_2").sum, 'b.count)
{code}

Details can be found in: 
https://github.com/apache/flink/compare/master...walterddr:bug_report_sql_query_result_consume_by_table_api


> [table] Using Table API to perform aggregation on another Table API / SQL 
> result table causes runVolcanoPlanner failed on physicalPlan generation
> -
>
> Key: FLINK-7177
> URL: https://issues.apache.org/jira/browse/FLINK-7177
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.3.1
>Reporter: Rong Rong
>
> For example:
> {code:title=flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/AggregationsITCase.scala|borderStyle=solid}
>   @Test
>   def testTableAggregationWithMultipleTableAPI(): Unit = {
>     val env = ExecutionEnvironment.getExecutionEnvironment
>     val tEnv = TableEnvironment.getTableEnvironment(env, config)
>     val inputTable = CollectionDataSets.getSmallNestedTupleDataSet(env).toTable(tEnv, 'a, 'b)
>     tEnv.registerDataSet("MyTable", inputTable)
>     val resultTable = tEnv.scan("MyTable").select('a, 'b).where('a.get("_1") > 0)
>     val failingTable = resultTable.select('a.get("_1").avg, 'a.get("_2").sum, 'b.count)
>     val expected = "2,6,3"
>     val results = failingTable.toDataSet[Row].collect()
>     TestBaseUtils.compareResultAsText(results.asJava, expected)
>   }
> {code}
> Details can be found in: 
> https://github.com/apache/flink/compare/master...walterddr:bug_report_sql_query_result_consume_by_table_api



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7177) [table] Using Table API to perform aggregation on another Table API / SQL result table causes runVolcanoPlanner failed on physicalPlan generation

2017-07-13 Thread Rong Rong (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong updated FLINK-7177:
-
Affects Version/s: 1.3.1

> [table] Using Table API to perform aggregation on another Table API / SQL 
> result table causes runVolcanoPlanner failed on physicalPlan generation
> -
>
> Key: FLINK-7177
> URL: https://issues.apache.org/jira/browse/FLINK-7177
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.3.1
>Reporter: Rong Rong
>
> For example:
> {code:title=flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/AggregationsITCase.scala|borderStyle=solid}
>   @Test
>   def testTableAggregationWithMultipleTableAPI(): Unit = {
>     val env = ExecutionEnvironment.getExecutionEnvironment
>     val tEnv = TableEnvironment.getTableEnvironment(env, config)
>     val inputTable = CollectionDataSets.getSmallNestedTupleDataSet(env).toTable(tEnv, 'a, 'b)
>     tEnv.registerDataSet("MyTable", inputTable)
>     val resultTable = tEnv.scan("MyTable").select('a, 'b).where('a.get("_1") > 0)
>     val failingTable = resultTable.select('a.get("_1").avg, 'a.get("_2").sum, 'b.count)
>     val expected = "2,6,3"
>     val results = failingTable.toDataSet[Row].collect()
>     TestBaseUtils.compareResultAsText(results.asJava, expected)
>   }
> {code}
> Details can be found in: 
> https://github.com/apache/flink/compare/master...walterddr:bug_report_sql_query_result_consume_by_table_api



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7177) [table] Using Table API to perform aggregation on another Table API / SQL result table causes runVolcanoPlanner failed on physicalPlan generation

2017-07-13 Thread Rong Rong (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong updated FLINK-7177:
-
Description: 
For example:
{code:title=flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/AggregationsITCase.scala|borderStyle=solid}
val env = ExecutionEnvironment.getExecutionEnvironment
val tEnv = TableEnvironment.getTableEnvironment(env, config)
val inputTable = CollectionDataSets.getSmallNestedTupleDataSet(env).toTable(tEnv, 'a, 'b)
val resultTable = inputTable.select('a, 'b).where('a.get("_1") > 0)
val failingTable = resultTable.select('a.get("_1").avg, 'a.get("_2").sum, 'b.count)
{code}

Details can be found in: 
https://github.com/apache/flink/compare/master...walterddr:bug_report_sql_query_result_consume_by_table_api

  was:
For example:
```
val env = ExecutionEnvironment.getExecutionEnvironment
val tEnv = TableEnvironment.getTableEnvironment(env, config)
val inputTable = CollectionDataSets.getSmallNestedTupleDataSet(env).toTable(tEnv, 'a, 'b)
val resultTable = inputTable.select('a, 'b).where('a.get("_1") > 0)
val failingTable = resultTable.select('a.get("_1").avg, 'a.get("_2").sum, 'b.count)
```

Details can be found in: 
https://github.com/apache/flink/compare/master...walterddr:bug_report_sql_query_result_consume_by_table_api


> [table] Using Table API to perform aggregation on another Table API / SQL 
> result table causes runVolcanoPlanner failed on physicalPlan generation
> -
>
> Key: FLINK-7177
> URL: https://issues.apache.org/jira/browse/FLINK-7177
> Project: Flink
>  Issue Type: Bug
>Reporter: Rong Rong
>
> For example:
> {code:title=flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/AggregationsITCase.scala|borderStyle=solid}
> val env = ExecutionEnvironment.getExecutionEnvironment
> val tEnv = TableEnvironment.getTableEnvironment(env, config)
> val inputTable = 
> CollectionDataSets.getSmallNestedTupleDataSet(env).toTable(tEnv, 'a, 'b)
> val resultTable = inputTable.select('a, 'b).where('a.get("_1") > 0)
> val failingTable = resultTable.select('a.get("_1").avg, 'a.get("_2").sum, 
> 'b.count)
> {code}
> Details can be found in: 
> https://github.com/apache/flink/compare/master...walterddr:bug_report_sql_query_result_consume_by_table_api



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7177) [table] Using Table API to perform aggregation on another Table API / SQL result table causes runVolcanoPlanner failed on physicalPlan generation

2017-07-13 Thread Rong Rong (JIRA)
Rong Rong created FLINK-7177:


 Summary: [table] Using Table API to perform aggregation on another 
Table API / SQL result table causes runVolcanoPlanner failed on physicalPlan 
generation
 Key: FLINK-7177
 URL: https://issues.apache.org/jira/browse/FLINK-7177
 Project: Flink
  Issue Type: Bug
Reporter: Rong Rong


For example:
```
val env = ExecutionEnvironment.getExecutionEnvironment
val tEnv = TableEnvironment.getTableEnvironment(env, config)
val inputTable = 
CollectionDataSets.getSmallNestedTupleDataSet(env).toTable(tEnv, 'a, 'b)
val resultTable = inputTable.select('a, 'b).where('a.get("_1") > 0)
val failingTable = resultTable.select('a.get("_1").avg, 'a.get("_2").sum, 
'b.count)
```

Details can be found in: 
https://github.com/apache/flink/compare/master...walterddr:bug_report_sql_query_result_consume_by_table_api



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7176) Failed builds (due to compilation) don't upload logs

2017-07-13 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086108#comment-16086108
 ] 

Chesnay Schepler commented on FLINK-7176:
-

This is mostly an issue for 1.3, where the compilation and tests are done in a 
single step.

> Failed builds (due to compilation) don't upload logs
> 
>
> Key: FLINK-7176
> URL: https://issues.apache.org/jira/browse/FLINK-7176
> Project: Flink
>  Issue Type: Bug
>  Components: Travis
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.3.0, 1.4.0
>
>
> If the compile phase fails on travis {{flink-dist}} may not be created. This 
> causes the check for the inclusion of snappy in {{flink-dist}} to fail.
> The function doing this check calls {{exit 1}} on error, which exits the 
> entire shell, thus skipping subsequent actions like the upload of logs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7174) Bump dependency of Kafka 0.10.x to the latest one

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086104#comment-16086104
 ] 

ASF GitHub Bot commented on FLINK-7174:
---

Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4321
  
Hmm, would a blocking operation be appropriate here? It would prevent 
`shutdown()` from actually breaking the loop. I think we would need some 
timeout here.


> Bump dependency of Kafka 0.10.x to the latest one
> -
>
> Key: FLINK-7174
> URL: https://issues.apache.org/jira/browse/FLINK-7174
> Project: Flink
>  Issue Type: Improvement
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> We are using a pretty old Kafka version for 0.10. Besides the bug fixes and 
> improvements that were made between 0.10.0.1 and 0.10.2.1, the 0.10.2.1 
> version is more similar to 0.11.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-13 Thread pnowojski
Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4321
  
Hmm, would a blocking operation be appropriate here? It would prevent 
`shutdown()` from actually breaking the loop. I think we would need some 
timeout here.
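
To make the trade-off concrete, here is a minimal, self-contained sketch of
the loop shape under discussion. It is a toy stand-in, not the connector's
actual code: a plain `LinkedBlockingQueue` plays the role of the partition
queue and all names are hypothetical. A `poll` with a timeout blocks instead
of spinning while nothing is assigned, yet still lets `shutdown()` end the
loop within one timeout interval.

```
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

/** Toy model of the consume loop; not the actual Kafka consumer thread. */
public class BlockingPollSketch {

    private final LinkedBlockingQueue<String> unassignedPartitions = new LinkedBlockingQueue<>();
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final List<String> assignedPartitions = new ArrayList<>();

    void run() throws InterruptedException {
        while (running.get()) {
            if (assignedPartitions.isEmpty()) {
                // Block for new partitions instead of busy-spinning, but bound
                // the wait so shutdown() can still terminate the loop promptly.
                String newPartition = unassignedPartitions.poll(100, TimeUnit.MILLISECONDS);
                if (newPartition != null) {
                    assignedPartitions.add(newPartition);
                }
                continue;
            }
            // ... poll records for the assigned partitions here ...
        }
    }

    void shutdown() {
        running.set(false);
    }
}
```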


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7166) generated avro sources not cleaned up or re-created after changes

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086069#comment-16086069
 ] 

ASF GitHub Bot commented on FLINK-7166:
---

Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/4309
  
I ran into this too. Thanks for the PR!

Shall we also create a ticket to move generated files out of `src`?


> generated avro sources not cleaned up or re-created after changes
> -
>
> Key: FLINK-7166
> URL: https://issues.apache.org/jira/browse/FLINK-7166
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>
> Since the AVRO upgrade to 1.8.2, I could not compile the flink-avro module 
> any more; it failed like this in {{mvn clean install -DskipTests -pl 
> flink-connectors/flink-avro}}:
> {code}
> Compilation failure
> [ERROR] 
> flink-connectors/flink-avro/src/test/java/org/apache/flink/api/io/avro/generated/Fixed16.java:[10,8]
>  org.apache.flink.api.io.avro.generated.Fixed16 is not abstract and does not 
> override abstract method readExternal(java.io.ObjectInput) in 
> org.apache.avro.specific.SpecificFixed
> {code}
> This was caused by Maven neither cleaning up the generated sources nor 
> overwriting them with new ones. Only a manual {{rm -rf 
> flink-connectors/flink-avro/src/test/java/org/apache/flink/api/io/avro/generated}}
>  solved the issue.
> The cause for this, though, is that the avro files are generated under the 
> {{src}} directory, not {{target/generated-test-sources}} as they should be. 
> Either the generated sources should be cleaned up as well, or the generated 
> files should be moved to this directory which is a more invasive change due 
> to some hacks with respect to these files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4309: [FLINK-7166][avro] cleanup generated test classes in the ...

2017-07-13 Thread bowenli86
Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/4309
  
I ran into this too. Thanks for the PR!

Shall we also create a ticket to move generated files out of `src`?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7175) Add simple benchmark suite for Flink

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086043#comment-16086043
 ] 

ASF GitHub Bot commented on FLINK-7175:
---

Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4323
  
Thanks for pointing this out. Indeed we will probably have to move it to a 
separate project in that case.


> Add simple benchmark suite for Flink
> 
>
> Key: FLINK-7175
> URL: https://issues.apache.org/jira/browse/FLINK-7175
> Project: Flink
>  Issue Type: Improvement
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> As a long-term goal it would be great to have both full-scale (with real 
> clusters) and micro benchmark suites that run automatically against each PR 
> and constantly against the master branch.
> The first step towards this is to implement some local micro benchmarks that 
> run simple Flink applications on a local Flink cluster. Developers could use 
> those benchmarks manually to test their changes.
> After that, we could set up a simple automation tool that runs those 
> benchmarks against the master branch on some old computer/laptop. We could 
> publish the results to our instance of the 
> [Codespeed|https://github.com/tobami/codespeed] project (example instance for 
> the pypy project: [PyPy Speed Center|http://speed.pypy.org/])



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4323: [FLINK-7175] Add first simplest Flink benchmark

2017-07-13 Thread pnowojski
Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4323
  
Thanks for pointing this out. Indeed we will probably have to move it to a 
separate project in that case.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5541) Missing null check for localJar in FlinkSubmitter#submitTopology()

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086028#comment-16086028
 ] 

ASF GitHub Bot commented on FLINK-5541:
---

Github user tedyu commented on the issue:

https://github.com/apache/flink/pull/4315
  
lgtm


> Missing null check for localJar in FlinkSubmitter#submitTopology()
> --
>
> Key: FLINK-5541
> URL: https://issues.apache.org/jira/browse/FLINK-5541
> Project: Flink
>  Issue Type: Bug
>  Components: Storm Compatibility
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> {code}
>   if (localJar == null) {
> try {
>   for (final URL url : ((ContextEnvironment) 
> ExecutionEnvironment.getExecutionEnvironment())
>   .getJars()) {
> // TODO verify that there is only one jar
> localJar = new File(url.toURI()).getAbsolutePath();
>   }
> } catch (final URISyntaxException e) {
>   // ignore
> } catch (final ClassCastException e) {
>   // ignore
> }
>   }
>   logger.info("Submitting topology " + name + " in distributed mode with 
> conf " + serConf);
>   client.submitTopologyWithOpts(name, localJar, topology);
> {code}
> Since the try block may encounter URISyntaxException / ClassCastException, we 
> should check that localJar is not null before calling 
> submitTopologyWithOpts().
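
For illustration, the suggested guard could look like the following sketch,
continuing the snippet above; the exception type and message are assumptions,
not necessarily what the eventual fix uses:
{code}
// Hypothetical guard before submission; 'localJar', 'name', 'serConf' and
// 'client' come from the surrounding method shown above.
if (localJar == null) {
  throw new RuntimeException(
      "Could not find a local jar to submit for topology '" + name + "'.");
}
logger.info("Submitting topology " + name + " in distributed mode with conf " + serConf);
client.submitTopologyWithOpts(name, localJar, topology);
{code}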



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4315: [FLINK-5541] Missing null check for localJar in FlinkSubm...

2017-07-13 Thread tedyu
Github user tedyu commented on the issue:

https://github.com/apache/flink/pull/4315
  
lgtm


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4323: [FLINK-7175] Add first simplest Flink benchmark

2017-07-13 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4323
  
Another idea: a separate repo but never released since this looks to be for 
observation by developers rather than the community at large (results could be 
summarized in the documentation).


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7175) Add simple benchmark suite for Flink

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086026#comment-16086026
 ] 

ASF GitHub Bot commented on FLINK-7175:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4323
  
Another idea: a separate repo but never released since this looks to be for 
observation by developers rather than the community at large (results could be 
summarized in the documentation).


> Add simple benchmark suite for Flink
> 
>
> Key: FLINK-7175
> URL: https://issues.apache.org/jira/browse/FLINK-7175
> Project: Flink
>  Issue Type: Improvement
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> As a long-term goal it would be great to have both full-scale (with real 
> clusters) and micro benchmark suites that run automatically against each PR 
> and constantly against the master branch.
> The first step towards this is to implement some local micro benchmarks that 
> run simple Flink applications on a local Flink cluster. Developers could use 
> those benchmarks manually to test their changes.
> After that, we could set up a simple automation tool that runs those 
> benchmarks against the master branch on some old computer/laptop. We could 
> publish the results to our instance of the 
> [Codespeed|https://github.com/tobami/codespeed] project (example instance for 
> the pypy project: [PyPy Speed Center|http://speed.pypy.org/])



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7174) Bump dependency of Kafka 0.10.x to the latest one

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086022#comment-16086022
 ] 

ASF GitHub Bot commented on FLINK-7174:
---

Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4321
  
Good catch with this spinning, I missed that.

Checking for assigned partitions on each iteration is unfortunately 
costly, because there is no cheap `isEmpty()` method. The one that I found, 
`consumer.assignment()`, is pretty expensive (it creates quite a lot of 
objects and takes some locks), so I wouldn't want to call it very often.

I could move this variable to the local scope of the `run()` function, but it 
would be a little more error prone (in case of some future refactoring, for 
example calling `reassignPartitions()` from somewhere outside of the `run()` 
method).


> Bump dependency of Kafka 0.10.x to the latest one
> -
>
> Key: FLINK-7174
> URL: https://issues.apache.org/jira/browse/FLINK-7174
> Project: Flink
>  Issue Type: Improvement
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> We are using a pretty old Kafka version for 0.10. Besides the bug fixes and 
> improvements that were made between 0.10.0.1 and 0.10.2.1, the 0.10.2.1 
> version is more similar to 0.11.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-13 Thread pnowojski
Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4321
  
Good catch with this spinning, I missed that.

Checking for assigned partitions on each iteration is unfortunately 
costly, because there is no cheap `isEmpty()` method. The one that I found, 
`consumer.assignment()`, is pretty expensive (it creates quite a lot of 
objects and takes some locks), so I wouldn't want to call it very often.

I could move this variable to the local scope of the `run()` function, but it 
would be a little more error prone (in case of some future refactoring, for 
example calling `reassignPartitions()` from somewhere outside of the `run()` 
method).


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7175) Add simple benchmark suite for Flink

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086020#comment-16086020
 ] 

ASF GitHub Bot commented on FLINK-7175:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4323
  
JMH's GPLv2 license is not compatible with the ASL (see 2063fa12)? This 
would need to be a separate repo distinct from the Apache Flink project and 
licensed under the GPL.


> Add simple benchmark suite for Flink
> 
>
> Key: FLINK-7175
> URL: https://issues.apache.org/jira/browse/FLINK-7175
> Project: Flink
>  Issue Type: Improvement
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> As a long-term goal it would be great to have both full-scale (with real 
> clusters) and micro benchmark suites that run automatically against each PR 
> and constantly against the master branch.
> The first step towards this is to implement some local micro benchmarks that 
> run simple Flink applications on a local Flink cluster. Developers could use 
> those benchmarks manually to test their changes.
> After that, we could set up a simple automation tool that runs those 
> benchmarks against the master branch on some old computer/laptop. We could 
> publish the results to our instance of the 
> [Codespeed|https://github.com/tobami/codespeed] project (example instance for 
> the pypy project: [PyPy Speed Center|http://speed.pypy.org/])



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4323: [FLINK-7175] Add first simplest Flink benchmark

2017-07-13 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4323
  
JMH's GPLv2 license is not compatible with the ASL (see 2063fa12)? This 
would need to be a separate repo distinct from the Apache Flink project and 
licensed under the GPL.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (FLINK-7176) Failed builds (due to compilation) don't upload logs

2017-07-13 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-7176:

Affects Version/s: (was: 1.3.2)
   1.3.0

> Failed builds (due to compilation) don't upload logs
> 
>
> Key: FLINK-7176
> URL: https://issues.apache.org/jira/browse/FLINK-7176
> Project: Flink
>  Issue Type: Bug
>  Components: Travis
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.3.0, 1.4.0
>
>
> If the compile phase fails on travis {{flink-dist}} may not be created. This 
> causes the check for the inclusion of snappy in {{flink-dist}} to fail.
> The function doing this check calls {{exit 1}} on error, which exits the 
> entire shell, thus skipping subsequent actions like the upload of logs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7176) Failed builds (due to compilation) don't upload logs

2017-07-13 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-7176:

Fix Version/s: 1.3.2

> Failed builds (due to compilation) don't upload logs
> 
>
> Key: FLINK-7176
> URL: https://issues.apache.org/jira/browse/FLINK-7176
> Project: Flink
>  Issue Type: Bug
>  Components: Travis
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.3.0, 1.4.0
>
>
> If the compile phase fails on travis {{flink-dist}} may not be created. This 
> causes the check for the inclusion of snappy in {{flink-dist}} to fail.
> The function doing this check calls {{exit 1}} on error, which exits the 
> entire shell, thus skipping subsequent actions like the upload of logs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7176) Failed builds (due to compilation) don't upload logs

2017-07-13 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-7176:

Affects Version/s: 1.3.2

> Failed builds (due to compilation) don't upload logs
> 
>
> Key: FLINK-7176
> URL: https://issues.apache.org/jira/browse/FLINK-7176
> Project: Flink
>  Issue Type: Bug
>  Components: Travis
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.3.0, 1.4.0
>
>
> If the compile phase fails on travis {{flink-dist}} may not be created. This 
> causes the check for the inclusion of snappy in {{flink-dist}} to fail.
> The function doing this check calls {{exit 1}} on error, which exits the 
> entire shell, thus skipping subsequent actions like the upload of logs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7176) Failed builds (due to compilation) don't upload logs

2017-07-13 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-7176:

Fix Version/s: (was: 1.3.2)
   1.3.0

> Failed builds (due to compilation) don't upload logs
> 
>
> Key: FLINK-7176
> URL: https://issues.apache.org/jira/browse/FLINK-7176
> Project: Flink
>  Issue Type: Bug
>  Components: Travis
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.3.0, 1.4.0
>
>
> If the compile phase fails on travis {{flink-dist}} may not be created. This 
> causes the check for the inclusion of snappy in {{flink-dist}} to fail.
> The function doing this check calls {{exit 1}} on error, which exits the 
> entire shell, thus skipping subsequent actions like the upload of logs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4325: [hotfix] [hadoopCompat] Fix tests to verify result...

2017-07-13 Thread fhueske
GitHub user fhueske opened a pull request:

https://github.com/apache/flink/pull/4325

[hotfix] [hadoopCompat] Fix tests to verify results new Hadoop input API.

The tests of the reworked Hadoop input API (reworked to remove the Hadoop 
dependency from `flink-java`) did not validate their results.

This PR adds the result validation.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fhueske/flink hadoopTests

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4325.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4325


commit e99e67fc89d75388d88efaa6e1c1a5b102e1855c
Author: Fabian Hueske 
Date:   2017-07-11T13:33:22Z

[hotfix] [hadoopCompat] Fix tests to verify results new Hadoop input API.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6232) Support proctime inner equi-join between two streams in the SQL API

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085955#comment-16085955
 ] 

ASF GitHub Bot commented on FLINK-6232:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4266
  
Hi @hongyuhong and @wuchong, I opened a new PR which extends this PR. 
Please have a look and give feedback.

@hongyuhong can you close the PRs #3715 and this one? 

Thank you, Fabian


> Support proctime inner equi-join between two streams in the SQL API
> ---
>
> Key: FLINK-6232
> URL: https://issues.apache.org/jira/browse/FLINK-6232
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: hongyuhong
>Assignee: hongyuhong
>
> The goal of this issue is to add support for inner equi-join on proc time 
> streams to the SQL interface.
> Queries similar to the following should be supported:
> {code}
> SELECT o.proctime, o.productId, o.orderId, s.proctime AS shipTime 
> FROM Orders AS o 
> JOIN Shipments AS s 
> ON o.orderId = s.orderId 
> AND o.proctime BETWEEN s.proctime AND s.proctime + INTERVAL '1' HOUR;
> {code}
> The following restrictions should initially apply:
> * The join hint only supports inner joins
> * The ON clause should include an equi-join condition
> * The time condition {{o.proctime BETWEEN s.proctime AND s.proctime + 
> INTERVAL '1' HOUR}} can only use proctime, which is a system attribute. The 
> time condition only supports bounded time ranges like {{o.proctime BETWEEN 
> s.proctime - INTERVAL '1' HOUR AND s.proctime + INTERVAL '1' HOUR}}, not 
> unbounded ones like {{o.proctime > s.proctime}}, and it should include the 
> proctime attributes of both streams; {{o.proctime between proctime() and 
> proctime() + 1}} should also not be supported.
> This issue includes:
> * Design of the DataStream operator to deal with stream join
> * Translation from Calcite's RelNode representation (LogicalJoin). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4266: [FLINK-6232][Table] support proctime inner windowed s...

2017-07-13 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4266
  
Hi @hongyuhong and @wuchong, I opened a new PR which extends this PR. 
Please have a look and give feedback.

@hongyuhong can you close the PRs #3715 and this one? 

Thank you, Fabian


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4324: [FLINK-6232] [table] Add processing time window in...

2017-07-13 Thread fhueske
GitHub user fhueske opened a pull request:

https://github.com/apache/flink/pull/4324

[FLINK-6232] [table] Add processing time window inner join to SQL.

This is continuation and extension of PR #3715 and #4266 by @hongyuhong.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fhueske/flink table-join

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4324.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4324


commit 10b219678f13c0c21889f97f267dcf4c517045e5
Author: hongyuhong 
Date:   2017-07-06T03:24:04Z

[FLINK-6232] [table] Add support for processing time inner windowed stream 
join.

commit 3d671a2d1867aea2f3d4eee30b2772045917d6d4
Author: Fabian Hueske 
Date:   2017-07-12T22:49:30Z

[FLINK-6232] [table] Add SQL documentation for time window join.

- Add support for window join predicates in WHERE clause.
- Refactoring of WindowJoinUtil.
- Minor refactorings of join classes.
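
As a usage illustration of what the second commit enables, a hedged sketch
against the 1.3-era Java API (the registration calls, field lists, and the
`proctime.proctime` marker are assumptions about that API; details may differ
from the final PR):

```
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class ProctimeWindowJoinSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

        DataStream<Tuple2<Long, String>> orders = env.fromElements(
            Tuple2.of(1L, "beer"), Tuple2.of(2L, "diaper"));
        DataStream<Tuple2<Long, String>> shipments = env.fromElements(
            Tuple2.of(1L, "shipped"));

        // 'proctime.proctime' appends a processing-time attribute.
        tEnv.registerDataStream("Orders", orders, "orderId, productId, proctime.proctime");
        tEnv.registerDataStream("Shipments", shipments, "orderId, status, proctime.proctime");

        // Window join with the equi-join and the bounded time predicate in WHERE.
        Table result = tEnv.sql(
            "SELECT o.proctime, o.productId, o.orderId, s.proctime AS shipTime " +
            "FROM Orders AS o, Shipments AS s " +
            "WHERE o.orderId = s.orderId " +
            "AND o.proctime BETWEEN s.proctime - INTERVAL '1' HOUR AND s.proctime");
    }
}
```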




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6232) Support proctime inner equi-join between two streams in the SQL API

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085952#comment-16085952
 ] 

ASF GitHub Bot commented on FLINK-6232:
---

GitHub user fhueske opened a pull request:

https://github.com/apache/flink/pull/4324

[FLINK-6232] [table] Add processing time window inner join to SQL.

This is continuation and extension of PR #3715 and #4266 by @hongyuhong.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fhueske/flink table-join

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4324.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4324


commit 10b219678f13c0c21889f97f267dcf4c517045e5
Author: hongyuhong 
Date:   2017-07-06T03:24:04Z

[FLINK-6232] [table] Add support for processing time inner windowed stream 
join.

commit 3d671a2d1867aea2f3d4eee30b2772045917d6d4
Author: Fabian Hueske 
Date:   2017-07-12T22:49:30Z

[FLINK-6232] [table] Add SQL documentation for time window join.

- Add support for window join predicates in WHERE clause.
- Refactoring of WindowJoinUtil.
- Minor refactorings of join classes.




> Support proctime inner equi-join between two streams in the SQL API
> ---
>
> Key: FLINK-6232
> URL: https://issues.apache.org/jira/browse/FLINK-6232
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: hongyuhong
>Assignee: hongyuhong
>
> The goal of this issue is to add support for inner equi-join on proc time 
> streams to the SQL interface.
> Queries similar to the following should be supported:
> {code}
> SELECT o.proctime, o.productId, o.orderId, s.proctime AS shipTime 
> FROM Orders AS o 
> JOIN Shipments AS s 
> ON o.orderId = s.orderId 
> AND o.proctime BETWEEN s.proctime AND s.proctime + INTERVAL '1' HOUR;
> {code}
> The following restrictions should initially apply:
> * The join hint only supports inner joins
> * The ON clause should include an equi-join condition
> * The time condition {{o.proctime BETWEEN s.proctime AND s.proctime + 
> INTERVAL '1' HOUR}} can only use proctime, which is a system attribute. The 
> time condition only supports bounded time ranges like {{o.proctime BETWEEN 
> s.proctime - INTERVAL '1' HOUR AND s.proctime + INTERVAL '1' HOUR}}, not 
> unbounded ones like {{o.proctime > s.proctime}}, and it should include the 
> proctime attributes of both streams; {{o.proctime between proctime() and 
> proctime() + 1}} should also not be supported.
> This issue includes:
> * Design of the DataStream operator to deal with stream join
> * Translation from Calcite's RelNode representation (LogicalJoin). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7153) Eager Scheduling can't allocate source for ExecutionGraph correctly

2017-07-13 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085926#comment-16085926
 ] 

Stephan Ewen commented on FLINK-7153:
-

I am happy to look at your suggested fix...

> Eager Scheduling can't allocate source for ExecutionGraph correctly
> ---
>
> Key: FLINK-7153
> URL: https://issues.apache.org/jira/browse/FLINK-7153
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.3.1
>Reporter: Sihua Zhou
>Assignee: Stephan Ewen
>Priority: Blocker
> Fix For: 1.3.2
>
>
> The ExecutionGraph.scheduleEager() function allocates resources for the 
> ExecutionJobVertices one by one via ExecutionJobVertex.allocateResourcesForAll(). 
> There are two problems with it:
> 1. ExecutionVertex.getPreferredLocationsBasedOnInputs will always return 
> empty, because `sourceSlot` is always null until the `ExecutionVertex` has 
> been deployed via 'Execution.deployToSlot()'. So allocating resources based on 
> preferred locations can't work correctly; we need to set the slot info for the 
> `Execution` as soon as Execution.allocateSlotForExecution() has been called 
> successfully.
> 2. The current allocation strategy can't allocate slots optimally. Here is the 
> test case:
> {code}
> JobVertex v1 = new JobVertex("v1", jid1);
> JobVertex v2 = new JobVertex("v2", jid2);
> SlotSharingGroup group = new SlotSharingGroup();
> v1.setSlotSharingGroup(group);
> v2.setSlotSharingGroup(group);
> v1.setParallelism(2);
> v2.setParallelism(4);
> v1.setInvokableClass(BatchTask.class);
> v2.setInvokableClass(BatchTask.class);
> v2.connectNewDataSetAsInput(v1, DistributionPattern.POINTWISE, 
> ResultPartitionType.PIPELINED_BOUNDED);
> {code}
> Currently, after allocating for v1 and v2, we get one local partition and 
> three remote partitions. But actually, it should be 2 local partitions and 2 
> remote partitions.
> The cause of the above problems is that the current strategy allocates the 
> resource for each execution one by one (if the execution can allocate from the 
> SlotSharingGroup it takes a slot from there, otherwise it asks for a new one). 
> Changing the allocation strategy to two steps will solve this problem; below 
> is the pseudo code:
> {code}
> for (ExecutionJobVertex ejv: getVerticesTopologically) {
> // step 1: try to allocate from the SlotSharingGroup based on inputs, one by 
> // one (allocating resources based on preferred locations only).
> // step 2: allocate for the remaining executions.
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7174) Bump dependency of Kafka 0.10.x to the latest one

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085894#comment-16085894
 ] 

ASF GitHub Bot commented on FLINK-7174:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/4321
  
I think this pull request will make the Kafka consumer go into a hot busy 
waiting loop when it has no partitions assigned.

I would suggest doing a blocking `take()` or similar on the 
`unassignedPartitionsQueue`.

Also, it would be great to get rid of the instance variable and simply check 
how many partitions are assigned on the KafkaConsumer, or pass this via a 
return value of the `reassignPartitions()` function.


> Bump dependency of Kafka 0.10.x to the latest one
> -
>
> Key: FLINK-7174
> URL: https://issues.apache.org/jira/browse/FLINK-7174
> Project: Flink
>  Issue Type: Improvement
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> We are using a pretty old Kafka version for 0.10. Besides the bug fixes and 
> improvements that were made between 0.10.0.1 and 0.10.2.1, the 0.10.2.1 
> version is more similar to 0.11.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-13 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/4321
  
I think this pull request will make the Kafka consumer go into a hot busy 
waiting loop when it has no partitions assigned.

I would suggest doing a blocking `take()` or similar on the 
`unassignedPartitionsQueue`.

Also, it would be great to get rid of the instance variable and simply check 
how many partitions are assigned on the KafkaConsumer, or pass this via a 
return value of the `reassignPartitions()` function.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7175) Add simple benchmark suite for Flink

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085882#comment-16085882
 ] 

ASF GitHub Bot commented on FLINK-7175:
---

GitHub user pnowojski opened a pull request:

https://github.com/apache/flink/pull/4323

[FLINK-7175] Add first simplest Flink benchmark

Example output:

```
Benchmark                           (objectReuse)  (parallelism)  (stateBackend)   Mode  Cnt     Score     Error   Units
EventCountBenchmark.benchmarkCount           true              1          memory  thrpt    5  4433.286 ± 777.338  ops/ms
EventCountBenchmark.benchmarkCount           true              1           rocks  thrpt    5   569.078 ±  15.298  ops/ms
```

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pnowojski/flink benchmarks

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4323.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4323


commit af6ea42ecdac723cef36f9d994cac3c680114ddd
Author: Piotr Nowojski 
Date:   2017-07-05T11:39:20Z

[FLINK-7175] Add first simplest Flink benchmark

Example output:

Benchmark                           (objectReuse)  (parallelism)  (stateBackend)   Mode  Cnt     Score     Error   Units
EventCountBenchmark.benchmarkCount           true              1          memory  thrpt    5  4433.286 ± 777.338  ops/ms
EventCountBenchmark.benchmarkCount           true              1           rocks  thrpt    5   569.078 ±  15.298  ops/ms




> Add simple benchmark suite for Flink
> 
>
> Key: FLINK-7175
> URL: https://issues.apache.org/jira/browse/FLINK-7175
> Project: Flink
>  Issue Type: Improvement
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> As a long-term goal it would be great to have both full-scale (with real 
> clusters) and micro benchmark suites that run automatically against each PR 
> and constantly against the master branch.
> The first step towards this is to implement some local micro benchmarks that 
> run simple Flink applications on a local Flink cluster. Developers could use 
> those benchmarks manually to test their changes.
> After that, we could set up a simple automation tool that runs those 
> benchmarks against the master branch on some old computer/laptop. We could 
> publish the results to our instance of the 
> [Codespeed|https://github.com/tobami/codespeed] project (example instance for 
> the pypy project: [PyPy Speed Center|http://speed.pypy.org/])



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4323: [FLINK-7175] Add first simplest Flink benchmark

2017-07-13 Thread pnowojski
GitHub user pnowojski opened a pull request:

https://github.com/apache/flink/pull/4323

[FLINK-7175] Add first simplest Flink benchmark

Example output:

```
Benchmark                           (objectReuse)  (parallelism)  (stateBackend)   Mode  Cnt     Score     Error   Units
EventCountBenchmark.benchmarkCount           true              1          memory  thrpt    5  4433.286 ± 777.338  ops/ms
EventCountBenchmark.benchmarkCount           true              1           rocks  thrpt    5   569.078 ±  15.298  ops/ms
```

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pnowojski/flink benchmarks

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4323.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4323


commit af6ea42ecdac723cef36f9d994cac3c680114ddd
Author: Piotr Nowojski 
Date:   2017-07-05T11:39:20Z

[FLINK-7175] Add first simplest Flink benchmark

Example output:

Benchmark                           (objectReuse)  (parallelism)  (stateBackend)   Mode  Cnt     Score     Error   Units
EventCountBenchmark.benchmarkCount           true              1          memory  thrpt    5  4433.286 ± 777.338  ops/ms
EventCountBenchmark.benchmarkCount           true              1           rocks  thrpt    5   569.078 ±  15.298  ops/ms
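
For readers who have not used JMH before, the benchmark above boils down to
the following minimal, self-contained JMH skeleton (plain JMH without Flink;
the class name and workload are made up for illustration):

```
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class MinimalBenchmarkSketch {

    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    @OutputTimeUnit(TimeUnit.MILLISECONDS)
    public long sumEvents() {
        // Stand-in workload; a real benchmark would execute a Flink job here.
        long sum = 0;
        for (int i = 0; i < 1_000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) throws Exception {
        // Runs all benchmarks in this class with a single fork, producing a
        // throughput table like the one shown above.
        new Runner(new OptionsBuilder()
            .include(MinimalBenchmarkSketch.class.getSimpleName())
            .forks(1)
            .build()).run();
    }
}
```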




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-7176) Failed builds (due to compilation) don't upload logs

2017-07-13 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-7176:
---

 Summary: Failed builds (due to compilation) don't upload logs
 Key: FLINK-7176
 URL: https://issues.apache.org/jira/browse/FLINK-7176
 Project: Flink
  Issue Type: Bug
  Components: Travis
Affects Versions: 1.4.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0


If the compile phase fails on travis {{flink-dist}} may not be created. This 
causes the check for the inclusion of snappy in {{flink-dist}} to fail.

The function doing this check calls {{exit 1}} on error, which exits the entire 
shell, thus skipping subsequent actions like the upload of logs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-7173) Fix the illustration of tumbling window.

2017-07-13 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-7173.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

Fixed in
2ad8e81aff51328a85fade89cce469a236301136
cf791bd508980f3f0fd6ee2c5051aaa04957d934

> Fix the illustration of tumbling window.
> 
>
> Key: FLINK-7173
> URL: https://issues.apache.org/jira/browse/FLINK-7173
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Documentation
>Reporter: sunjincheng
>Assignee: sunjincheng
> Fix For: 1.4.0
>
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> !screenshot-1.png!
> Change it to :
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7173) Fix the illustration of tumbling window.

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085870#comment-16085870
 ] 

ASF GitHub Bot commented on FLINK-7173:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4322
  
@sunjincheng121 Thanks a lot! I merged, could you please close this PR?


> Fix the illustration of tumbling window.
> 
>
> Key: FLINK-7173
> URL: https://issues.apache.org/jira/browse/FLINK-7173
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Documentation
>Reporter: sunjincheng
>Assignee: sunjincheng
> Fix For: 1.4.0
>
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> !screenshot-1.png!
> Change it to :
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4322: [FLINK-7173][doc]Change the illustration of tumbling wind...

2017-07-13 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4322
  
@sunjincheng121 Thanks a lot! I merged, could you please close this PR?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6975) Add CONCAT/CONCAT_WS supported in TableAPI

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085835#comment-16085835
 ] 

ASF GitHub Bot commented on FLINK-6975:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4274
  
@wuchong  Thanks for your review. I have updated the PR according to your 
comments.
Thanks, Jincheng


> Add CONCAT/CONCAT_WS supported in TableAPI
> --
>
> Key: FLINK-6975
> URL: https://issues.apache.org/jira/browse/FLINK-6975
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>  Labels: starter
>
> See FLINK-6925 for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4274: [FLINK-6975][table]Add CONCAT/CONCAT_WS supported in Tabl...

2017-07-13 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4274
  
@wuchong  Thanks for your review. I have updated the PR according to your 
comments.
Thanks, Jincheng


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-7175) Add simple benchmark suite for Flink

2017-07-13 Thread Piotr Nowojski (JIRA)
Piotr Nowojski created FLINK-7175:
-

 Summary: Add simple benchmark suite for Flink
 Key: FLINK-7175
 URL: https://issues.apache.org/jira/browse/FLINK-7175
 Project: Flink
  Issue Type: Improvement
Reporter: Piotr Nowojski
Assignee: Piotr Nowojski


As a long-term goal it would be great to have both full-scale (with real 
clusters) and micro benchmark suites that run automatically against each PR 
and constantly against the master branch.

The first step towards this is to implement some local micro benchmarks that 
run simple Flink applications on a local Flink cluster. Developers could use 
those benchmarks manually to test their changes.

After that, we could set up a simple automation tool that runs those 
benchmarks against the master branch on some old computer/laptop. We could 
publish the results to our instance of the 
[Codespeed|https://github.com/tobami/codespeed] project (example instance for 
the pypy project: [PyPy Speed Center|http://speed.pypy.org/])



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6964) Fix recovery for incremental checkpoints in StandaloneCompletedCheckpointStore

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085791#comment-16085791
 ] 

ASF GitHub Bot commented on FLINK-6964:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4192
  
This looks super nice now! I tried it and disabled each of the two fixes 
(registering shared state after restore and backend UUIDs) and reverting either 
made the new integration test fail.

Let's give @StephanEwen some time to look over this as well but from my 
side this LGTM now.

When merging, the changes to `SharedStateRegistry` and 
`ZooKeeperCompletedCheckpointStoreTest` should be factored out into a separate 
commit. They are good changes but unrelated to the Jira issue and bug that this 
fixes.


> Fix recovery for incremental checkpoints in StandaloneCompletedCheckpointStore
> --
>
> Key: FLINK-6964
> URL: https://issues.apache.org/jira/browse/FLINK-6964
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Blocker
> Fix For: 1.3.2
>
>
> {{StandaloneCompletedCheckpointStore}} does not register shared states on 
> resume. However, for externalized checkpoints, it registers the checkpoint 
> from which it resumed. This checkpoint gets added to the completed checkpoint 
> store as part of the resume.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4192: [FLINK-6964] [checkpoint] Fix externalized incremental ch...

2017-07-13 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4192
  
This looks super nice now! I tried it and disabled each of the two fixes 
(registering shared state after restore and backend UUIDs) and reverting either 
made the new integration test fail.

Let's give @StephanEwen some time to look over this as well but from my 
side this LGTM now.

When merging, the changes to `SharedStateRegistry` and 
`ZooKeeperCompletedCheckpointStoreTest` should be factored out into a separate 
commit. They are good changes but unrelated to the Jira issue and bug that this 
fixes.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6617) Improve JAVA and SCALA logical plans consistent test

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085761#comment-16085761
 ] 

ASF GitHub Bot commented on FLINK-6617:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3943


> Improve JAVA and SCALA logical plans consistent test
> 
>
> Key: FLINK-6617
> URL: https://issues.apache.org/jira/browse/FLINK-6617
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
> Fix For: 1.4.0
>
>
> Currently, we need some `StringExpression` tests for all JAVA and SCALA APIs, 
> such as `GroupAggregations`, `GroupWindowAggregation` (Session, Tumble), 
> `Calc`, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #3943: [FLINK-6617][table] Improve JAVA and SCALA logical...

2017-07-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3943


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7153) Eager Scheduling can't allocate source for ExecutionGraph correctly

2017-07-13 Thread Sihua Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085706#comment-16085706
 ] 

Sihua Zhou commented on FLINK-7153:
---

[~StephanEwen] Thanks for your reply. Can I contribute to this issue? I'm very 
interested in Flink. Indeed, I have already fixed this issue in my fork. I can 
show you the revised plan if you like.
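
In the meantime, here is a toy sketch of the two-step idea from the 
description quoted below; all types are simplified stand-ins, not Flink's 
ExecutionGraph classes:
{code}
import java.util.ArrayList;
import java.util.List;

/** Toy model of two-step slot allocation; not the actual ExecutionGraph API. */
public class TwoStepAllocationSketch {

    static class Execution {
        final String name;
        final boolean hasPreferredLocation;
        boolean assigned;

        Execution(String name, boolean hasPreferredLocation) {
            this.name = name;
            this.hasPreferredLocation = hasPreferredLocation;
        }
    }

    public static void main(String[] args) {
        List<Execution> executions = new ArrayList<>();
        executions.add(new Execution("v2_0", true));
        executions.add(new Execution("v2_1", false));

        // Step 1: first satisfy executions whose inputs pin them to a known
        // location, so locality is decided before the leftover slots go out.
        for (Execution e : executions) {
            if (e.hasPreferredLocation) {
                e.assigned = true; // allocate from the slot sharing group here
            }
        }
        // Step 2: allocate whatever is left for the remaining executions.
        for (Execution e : executions) {
            if (!e.assigned) {
                e.assigned = true; // request a new slot here
            }
        }

        for (Execution e : executions) {
            System.out.println(e.name + " assigned=" + e.assigned);
        }
    }
}
{code}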

> Eager Scheduling can't allocate source for ExecutionGraph correctly
> ---
>
> Key: FLINK-7153
> URL: https://issues.apache.org/jira/browse/FLINK-7153
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.3.1
>Reporter: Sihua Zhou
>Assignee: Stephan Ewen
>Priority: Blocker
> Fix For: 1.3.2
>
>
> The ExecutionGraph.scheduleEager() function allocates resources for the 
> ExecutionJobVertices one by one via ExecutionJobVertex.allocateResourcesForAll(). 
> There are two problems with it:
> 1. ExecutionVertex.getPreferredLocationsBasedOnInputs will always return 
> empty, because `sourceSlot` is always null until the `ExecutionVertex` has 
> been deployed via 'Execution.deployToSlot()'. So allocating resources based on 
> preferred locations can't work correctly; we need to set the slot info for the 
> `Execution` as soon as Execution.allocateSlotForExecution() has been called 
> successfully.
> 2. The current allocation strategy can't allocate slots optimally. Here is the 
> test case:
> {code}
> JobVertex v1 = new JobVertex("v1", jid1);
> JobVertex v2 = new JobVertex("v2", jid2);
> SlotSharingGroup group = new SlotSharingGroup();
> v1.setSlotSharingGroup(group);
> v2.setSlotSharingGroup(group);
> v1.setParallelism(2);
> v2.setParallelism(4);
> v1.setInvokableClass(BatchTask.class);
> v2.setInvokableClass(BatchTask.class);
> v2.connectNewDataSetAsInput(v1, DistributionPattern.POINTWISE, 
> ResultPartitionType.PIPELINED_BOUNDED);
> {code}
> Currently, after allocating for v1 and v2, we get one local partition and 
> three remote partitions. But actually, it should be 2 local partitions and 2 
> remote partitions.
> The cause of the above problems is that the current strategy allocates the 
> resource for each execution one by one (if the execution can allocate from the 
> SlotSharingGroup it takes a slot from there, otherwise it asks for a new one). 
> Changing the allocation strategy to two steps will solve this problem; below 
> is the pseudo code:
> {code}
> for (ExecutionJobVertex ejv: getVerticesTopologically) {
> // step 1: try to allocate from the SlotSharingGroup based on inputs, one by 
> // one (allocating resources based on preferred locations only).
> // step 2: allocate for the remaining executions.
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-7153) Eager Scheduling can't allocate source for ExecutionGraph correctly

2017-07-13 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen reassigned FLINK-7153:
---

Assignee: Stephan Ewen

> Eager Scheduling can't allocate source for ExecutionGraph correctly
> ---
>
> Key: FLINK-7153
> URL: https://issues.apache.org/jira/browse/FLINK-7153
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.3.1
>Reporter: Sihua Zhou
>Assignee: Stephan Ewen
> Fix For: 1.3.2
>
>
> The ExecutionGraph.scheduleEager() function allocates resources for the 
> ExecutionJobVertices one by one via ExecutionJobVertex.allocateResourcesForAll(). 
> There are two problems with it:
> 1. ExecutionVertex.getPreferredLocationsBasedOnInputs will always return 
> empty, because `sourceSlot` is always null until the `ExecutionVertex` has 
> been deployed via 'Execution.deployToSlot()'. So allocating resources based on 
> preferred locations can't work correctly; we need to set the slot info for the 
> `Execution` as soon as Execution.allocateSlotForExecution() has been called 
> successfully.
> 2. The current allocation strategy can't allocate slots optimally. Here is the 
> test case:
> {code}
> JobVertex v1 = new JobVertex("v1", jid1);
> JobVertex v2 = new JobVertex("v2", jid2);
> SlotSharingGroup group = new SlotSharingGroup();
> v1.setSlotSharingGroup(group);
> v2.setSlotSharingGroup(group);
> v1.setParallelism(2);
> v2.setParallelism(4);
> v1.setInvokableClass(BatchTask.class);
> v2.setInvokableClass(BatchTask.class);
> v2.connectNewDataSetAsInput(v1, DistributionPattern.POINTWISE, 
> ResultPartitionType.PIPELINED_BOUNDED);
> {code}
> Currently, after allocating for v1 and v2, we get one local partition and 
> three remote partitions. But actually, it should be 2 local partitions and 2 
> remote partitions.
> The cause of the above problems is that the current strategy allocates the 
> resource for each execution one by one (if the execution can allocate from the 
> SlotSharingGroup it takes a slot from there, otherwise it asks for a new one). 
> Changing the allocation strategy to two steps will solve this problem; below 
> is the pseudo code:
> {code}
> for (ExecutionJobVertex ejv: getVerticesTopologically) {
> // step 1: try to allocate from the SlotSharingGroup based on inputs, one by 
> // one (allocating resources based on preferred locations only).
> // step 2: allocate for the remaining executions.
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7153) Eager Scheduling can't allocate source for ExecutionGraph correctly

2017-07-13 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen updated FLINK-7153:

Fix Version/s: 1.3.2

> Eager Scheduling can't allocate source for ExecutionGraph correctly
> ---
>
> Key: FLINK-7153
> URL: https://issues.apache.org/jira/browse/FLINK-7153
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.3.1
>Reporter: Sihua Zhou
>Assignee: Stephan Ewen
>Priority: Blocker
> Fix For: 1.3.2
>
>
> The ExecutionGraph.scheduleEager() function allocates slots for the 
> ExecutionJobVertices one by one by calling 
> ExecutionJobVertex.allocateResourcesForAll(). There are two problems with it:
> 1. ExecutionVertex.getPreferredLocationsBasedOnInputs will always return an 
> empty result, because `sourceSlot` stays null until the `ExecutionVertex` has 
> been deployed via `Execution.deployToSlot()`. So allocating resources based on 
> preferred locations can't work correctly; we should set the slot info for the 
> `Execution` as soon as Execution.allocateSlotForExecution() has completed 
> successfully.
> 2. The current allocation strategy can't allocate the slots optimally. Here is 
> the test case:
> {code}
> JobVertex v1 = new JobVertex("v1", jid1);
> JobVertex v2 = new JobVertex("v2", jid2);
> SlotSharingGroup group = new SlotSharingGroup();
> v1.setSlotSharingGroup(group);
> v2.setSlotSharingGroup(group);
> v1.setParallelism(2);
> v2.setParallelism(4);
> v1.setInvokableClass(BatchTask.class);
> v2.setInvokableClass(BatchTask.class);
> v2.connectNewDataSetAsInput(v1, DistributionPattern.POINTWISE, 
> ResultPartitionType.PIPELINED_BOUNDED);
> {code}
> Currently, after allocating slots for v1 and v2, we get 1 local partition and 
> 3 remote partitions. But actually, it should be 2 local partitions and 2 
> remote partitions. 
> The cause of the above problems is that the current strategy allocates the 
> resource for each execution one by one (if the execution can allocate from the 
> slot sharing group, it takes that slot; otherwise it asks for a new one). 
> Changing the allocation strategy to two steps will solve this problem; below 
> is the pseudo code:
> {code}
> for (ExecutionJobVertex ejv : getVerticesTopologically()) {
>     // step 1: try to allocate from the slot sharing group based on inputs, 
>     // one by one (allocating resources based on location preferences only).
>     // step 2: allocate slots for the remaining executions.
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7153) Eager Scheduling can't allocate source for ExecutionGraph correctly

2017-07-13 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen updated FLINK-7153:

Priority: Blocker  (was: Major)

> Eager Scheduling can't allocate source for ExecutionGraph correctly
> ---
>
> Key: FLINK-7153
> URL: https://issues.apache.org/jira/browse/FLINK-7153
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.3.1
>Reporter: Sihua Zhou
>Assignee: Stephan Ewen
>Priority: Blocker
> Fix For: 1.3.2
>
>
> The ExecutionGraph.scheduleEager() function allocates slots for the 
> ExecutionJobVertices one by one by calling 
> ExecutionJobVertex.allocateResourcesForAll(). There are two problems with it:
> 1. ExecutionVertex.getPreferredLocationsBasedOnInputs will always return an 
> empty result, because `sourceSlot` stays null until the `ExecutionVertex` has 
> been deployed via `Execution.deployToSlot()`. So allocating resources based on 
> preferred locations can't work correctly; we should set the slot info for the 
> `Execution` as soon as Execution.allocateSlotForExecution() has completed 
> successfully.
> 2. The current allocation strategy can't allocate the slots optimally. Here is 
> the test case:
> {code}
> JobVertex v1 = new JobVertex("v1", jid1);
> JobVertex v2 = new JobVertex("v2", jid2);
> SlotSharingGroup group = new SlotSharingGroup();
> v1.setSlotSharingGroup(group);
> v2.setSlotSharingGroup(group);
> v1.setParallelism(2);
> v2.setParallelism(4);
> v1.setInvokableClass(BatchTask.class);
> v2.setInvokableClass(BatchTask.class);
> v2.connectNewDataSetAsInput(v1, DistributionPattern.POINTWISE, 
> ResultPartitionType.PIPELINED_BOUNDED);
> {code}
> Currently, after allocating slots for v1 and v2, we get 1 local partition and 
> 3 remote partitions. But actually, it should be 2 local partitions and 2 
> remote partitions. 
> The cause of the above problems is that the current strategy allocates the 
> resource for each execution one by one (if the execution can allocate from the 
> slot sharing group, it takes that slot; otherwise it asks for a new one). 
> Changing the allocation strategy to two steps will solve this problem; below 
> is the pseudo code:
> {code}
> for (ExecutionJobVertex ejv : getVerticesTopologically()) {
>     // step 1: try to allocate from the slot sharing group based on inputs, 
>     // one by one (allocating resources based on location preferences only).
>     // step 2: allocate slots for the remaining executions.
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7153) Eager Scheduling can't allocate source for ExecutionGraph correctly

2017-07-13 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085699#comment-16085699
 ] 

Stephan Ewen commented on FLINK-7153:
-

Confirmed this bug. Will look into it and see that we provide a fix ASAP...

> Eager Scheduling can't allocate source for ExecutionGraph correctly
> ---
>
> Key: FLINK-7153
> URL: https://issues.apache.org/jira/browse/FLINK-7153
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.3.1
>Reporter: Sihua Zhou
>
> The ExecutionGraph.scheduleEager() function allocates slots for the 
> ExecutionJobVertices one by one by calling 
> ExecutionJobVertex.allocateResourcesForAll(). There are two problems with it:
> 1. ExecutionVertex.getPreferredLocationsBasedOnInputs will always return an 
> empty result, because `sourceSlot` stays null until the `ExecutionVertex` has 
> been deployed via `Execution.deployToSlot()`. So allocating resources based on 
> preferred locations can't work correctly; we should set the slot info for the 
> `Execution` as soon as Execution.allocateSlotForExecution() has completed 
> successfully.
> 2. The current allocation strategy can't allocate the slots optimally. Here is 
> the test case:
> {code}
> JobVertex v1 = new JobVertex("v1", jid1);
> JobVertex v2 = new JobVertex("v2", jid2);
> SlotSharingGroup group = new SlotSharingGroup();
> v1.setSlotSharingGroup(group);
> v2.setSlotSharingGroup(group);
> v1.setParallelism(2);
> v2.setParallelism(4);
> v1.setInvokableClass(BatchTask.class);
> v2.setInvokableClass(BatchTask.class);
> v2.connectNewDataSetAsInput(v1, DistributionPattern.POINTWISE, 
> ResultPartitionType.PIPELINED_BOUNDED);
> {code}
> Currently, after allocating slots for v1 and v2, we get 1 local partition and 
> 3 remote partitions. But actually, it should be 2 local partitions and 2 
> remote partitions. 
> The cause of the above problems is that the current strategy allocates the 
> resource for each execution one by one (if the execution can allocate from the 
> slot sharing group, it takes that slot; otherwise it asks for a new one). 
> Changing the allocation strategy to two steps will solve this problem; below 
> is the pseudo code:
> {code}
> for (ExecutionJobVertex ejv : getVerticesTopologically()) {
>     // step 1: try to allocate from the slot sharing group based on inputs, 
>     // one by one (allocating resources based on location preferences only).
>     // step 2: allocate slots for the remaining executions.
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4322: [FLINK-7173][doc]Change the illustration of tumbling wind...

2017-07-13 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4322
  
Yes @aljoscha, I agree with you. I have updated the PR. :-)

Thanks, Jincheng


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7173) Fix the illustration of tumbling window.

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085668#comment-16085668
 ] 

ASF GitHub Bot commented on FLINK-7173:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4322
  
Yes @aljoscha, I agree with you. I have updated the PR. :-)

Thanks, Jincheng


> Fix the illustration of tumbling window.
> 
>
> Key: FLINK-7173
> URL: https://issues.apache.org/jira/browse/FLINK-7173
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Documentation
>Reporter: sunjincheng
>Assignee: sunjincheng
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> !screenshot-1.png!
> Change it to:
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7169) Support AFTER MATCH SKIP function in CEP library API

2017-07-13 Thread Dawid Wysakowicz (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085643#comment-16085643
 ] 

Dawid Wysakowicz commented on FLINK-7169:
-

I have not had enough time to think through all the cases, but I also had 
something like what [~dian.fu] said in mind. 

> Support AFTER MATCH SKIP function in CEP library API
> 
>
> Key: FLINK-7169
> URL: https://issues.apache.org/jira/browse/FLINK-7169
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Yueting Chen
>Assignee: Yueting Chen
>
> In order to support Oracle's MATCH_RECOGNIZE on top of the CEP library, we 
> need to support AFTER MATCH SKIP function in CEP API.
> There're four options in AFTER MATCH SKIP, listed as follows:
> 1. AFTER MATCH SKIP TO NEXT ROW: resume pattern matching at the row after the 
> first row of the current match.
> 2. AFTER MATCH SKIP PAST LAST ROW: resume pattern matching at the next row 
> after the last row of the current match.
> 3. AFTER MATCH SKIP TO FIRST *RPV*: resume pattern matching at the first row 
> that is mapped to the row pattern variable RPV.
> 4. AFTER MATCH SKIP TO LAST *RPV*: resume pattern matching at the last row 
> that is mapped to the row pattern variable RPV.
> I think we can introduce a new function to the `CEP` class, which takes an 
> additional parameter of type AfterMatchSkipStrategy.
> The new API may look like this:
> {code}
> public static <T> PatternStream<T> pattern(DataStream<T> input, Pattern<T, ?> 
> pattern, AfterMatchSkipStrategy afterMatchSkipStrategy)
> {code}
> We can also make `SKIP TO NEXT ROW` the default option, because that's how the 
> CEP library currently behaves.
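> For illustration, a rough usage sketch of the proposed overload; the 
> AfterMatchSkipStrategy type and its factory method are assumed here, not 
> existing API:
> {code}
> // option 2 above: skip past the last row of the current match
> AfterMatchSkipStrategy skip = AfterMatchSkipStrategy.skipPastLastRow();
> // input is assumed to be a DataStream<Event>, pattern a Pattern<Event, ?>
> PatternStream<Event> matches = CEP.pattern(input, pattern, skip);
> {code}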



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7173) Fix the illustration of tumbling window.

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085642#comment-16085642
 ] 

ASF GitHub Bot commented on FLINK-7173:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4322
  
While we're at it, I think it would also make sense to fix the picture for 
sliding windows. What do you think?


> Fix the illustration of tumbling window.
> 
>
> Key: FLINK-7173
> URL: https://issues.apache.org/jira/browse/FLINK-7173
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Documentation
>Reporter: sunjincheng
>Assignee: sunjincheng
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> !screenshot-1.png!
> Change it to:
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4322: [FLINK-7173][doc]Change the illustration of tumbling wind...

2017-07-13 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4322
  
While we're at it, I think it would also make sense to fix the picture for 
sliding windows. What do you think?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7169) Support AFTER MATCH SKIP function in CEP library API

2017-07-13 Thread Dian Fu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085637#comment-16085637
 ] 

Dian Fu commented on FLINK-7169:


Hi [~ychen],
Thanks a lot for working on this ticket. :)

For the API change, maybe it's better to add an API in {{Pattern}}, such as 
{{Pattern.setSkipStrategy(AfterMatchSkipStrategy afterMatchSkipStrategy)}}.

For the implementation of the {{AfterMatchSkipStrategy}}, I have a very rough 
thought. For example, for pattern {{a b*}}, if the skip strategy is {{AFTER 
MATCH SKIP TO FIRST b}}, we would only add a new {{Start}} {{ComputationState}} 
once the first {{b}} is matched (rather than adding a new {{Start}} 
{{ComputationState}} once the {{Start}} {{ComputationState}} is matched, which 
is the current strategy). If this is feasible, we don't need to keep track of 
the event order any more. Thoughts?
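
For context, a rough sketch of how the suggested {{Pattern}}-level API could 
look from the user side ({{setSkipStrategy}} and {{skipToFirst}} are 
hypothetical names; none of this exists in the CEP library yet):

{code}
// pattern "a b*", assuming input is a DataStream<Event>
Pattern<Event, ?> pattern = Pattern.<Event>begin("a")
    .next("b").oneOrMore().optional();
// hypothetical: attach the skip strategy to the pattern itself
pattern.setSkipStrategy(AfterMatchSkipStrategy.skipToFirst("b"));
PatternStream<Event> matches = CEP.pattern(input, pattern);
{code}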

> Support AFTER MATCH SKIP function in CEP library API
> 
>
> Key: FLINK-7169
> URL: https://issues.apache.org/jira/browse/FLINK-7169
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Yueting Chen
>Assignee: Yueting Chen
>
> In order to support Oracle's MATCH_RECOGNIZE on top of the CEP library, we 
> need to support AFTER MATCH SKIP function in CEP API.
> There're four options in AFTER MATCH SKIP, listed as follows:
> 1. AFTER MATCH SKIP TO NEXT ROW: resume pattern matching at the row after the 
> first row of the current match.
> 2. AFTER MATCH SKIP PAST LAST ROW: resume pattern matching at the next row 
> after the last row of the current match.
> 3. AFTER MATCH SKIP TO FIRST *RPV*: resume pattern matching at the first row 
> that is mapped to the row pattern variable RPV.
> 4. AFTER MATCH SKIP TO LAST *RPV*: resume pattern matching at the last row 
> that is mapped to the row pattern variable RPV.
> I think we can introduce a new function to the `CEP` class, which takes an 
> additional parameter of type AfterMatchSkipStrategy.
> The new API may look like this:
> {code}
> public static <T> PatternStream<T> pattern(DataStream<T> input, Pattern<T, ?> 
> pattern, AfterMatchSkipStrategy afterMatchSkipStrategy)
> {code}
> We can also make `SKIP TO NEXT ROW` the default option, because that's how the 
> CEP library currently behaves.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6964) Fix recovery for incremental checkpoints in StandaloneCompletedCheckpointStore

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085623#comment-16085623
 ] 

ASF GitHub Bot commented on FLINK-6964:
---

Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/4192
  
@aljoscha @StephanEwen I have added an IT case for this problem. It tests a 
sequence of externalized checkpoint recoveries, using full and incremental 
checkpoints on both the standalone and the ZooKeeper completed checkpoint store.
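
For reference, a minimal sketch of the configuration surface such a test 
exercises (not the actual IT case; the checkpoint path and interval are made 
up):

{code}
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(1000L);
// retain the checkpoint when the job is cancelled so it can be resumed later
env.getCheckpointConfig().enableExternalizedCheckpoints(
    CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
// the second constructor flag enables incremental RocksDB checkpoints
env.setStateBackend(new RocksDBStateBackend("file:///tmp/checkpoints", true));
{code}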


> Fix recovery for incremental checkpoints in StandaloneCompletedCheckpointStore
> --
>
> Key: FLINK-6964
> URL: https://issues.apache.org/jira/browse/FLINK-6964
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Blocker
> Fix For: 1.3.2
>
>
> {{StandaloneCompletedCheckpointStore}} does not register shared states on 
> resume. However, for externalized checkpoints, it registers the checkpoint 
> from which it resumed. This checkpoint gets added to the completed checkpoint 
> store as part of the resume.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4192: [FLINK-6964] [checkpoint] Fix externalized incremental ch...

2017-07-13 Thread StefanRRichter
Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/4192
  
@aljoscha @StephanEwen I have added an IT case for this problem. It tests a 
sequence of externalized checkpoint recoveries, using full and incremental 
checkpoints on both the standalone and the ZooKeeper completed checkpoint store.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (FLINK-6617) Improve JAVA and SCALA logical plans consistent test

2017-07-13 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-6617.
-
   Resolution: Fixed
Fix Version/s: 1.4.0

Fixed in 1.4.0: ecde7bc13c992e81a50d4f9b897ba4840709629c & 
f1fafc0e1e664296e54c3a37e414087cf85c64cd
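
For context, a minimal sketch of the kind of consistency these tests check: the 
string-expression variant below should produce the same logical plan as the 
Scala expression DSL version (the table and field names are assumed):

{code}
// Java Table API with string expressions
Table result = tableEnv.scan("T")
    .groupBy("b")
    .select("a.sum as total");
// should plan identically to the Scala DSL:
//   tableEnv.scan("T").groupBy('b).select('a.sum as 'total)
{code}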

> Improve JAVA and SCALA logical plans consistent test
> 
>
> Key: FLINK-6617
> URL: https://issues.apache.org/jira/browse/FLINK-6617
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
> Fix For: 1.4.0
>
>
> Currently, we need some `StringExpression` tests for all Java and Scala APIs, 
> such as `GroupAggregations`, `GroupWindowAggregaton` (Session, Tumble), 
> `Calc`, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7153) Eager Scheduling can't allocate source for ExecutionGraph correctly

2017-07-13 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085615#comment-16085615
 ] 

Fabian Hueske commented on FLINK-7153:
--

Hi [~sihuazhou], thanks for reporting this issue. Unfortunately, I'm not very 
familiar with Flink's scheduler.
I'd recommend reaching out to the dev mailing list to discuss this.

Thanks, Fabian

> Eager Scheduling can't allocate source for ExecutionGraph correctly
> ---
>
> Key: FLINK-7153
> URL: https://issues.apache.org/jira/browse/FLINK-7153
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.3.1
>Reporter: Sihua Zhou
>
> The ExecutionGraph.scheduleEager() function allocates slots for the 
> ExecutionJobVertices one by one by calling 
> ExecutionJobVertex.allocateResourcesForAll(). There are two problems with it:
> 1. ExecutionVertex.getPreferredLocationsBasedOnInputs will always return an 
> empty result, because `sourceSlot` stays null until the `ExecutionVertex` has 
> been deployed via `Execution.deployToSlot()`. So allocating resources based on 
> preferred locations can't work correctly; we should set the slot info for the 
> `Execution` as soon as Execution.allocateSlotForExecution() has completed 
> successfully.
> 2. The current allocation strategy can't allocate the slots optimally. Here is 
> the test case:
> {code}
> JobVertex v1 = new JobVertex("v1", jid1);
> JobVertex v2 = new JobVertex("v2", jid2);
> SlotSharingGroup group = new SlotSharingGroup();
> v1.setSlotSharingGroup(group);
> v2.setSlotSharingGroup(group);
> v1.setParallelism(2);
> v2.setParallelism(4);
> v1.setInvokableClass(BatchTask.class);
> v2.setInvokableClass(BatchTask.class);
> v2.connectNewDataSetAsInput(v1, DistributionPattern.POINTWISE, 
> ResultPartitionType.PIPELINED_BOUNDED);
> {code}
> Currently, after allocating slots for v1 and v2, we get 1 local partition and 
> 3 remote partitions. But actually, it should be 2 local partitions and 2 
> remote partitions. 
> The cause of the above problems is that the current strategy allocates the 
> resource for each execution one by one (if the execution can allocate from the 
> slot sharing group, it takes that slot; otherwise it asks for a new one). 
> Changing the allocation strategy to two steps will solve this problem; below 
> is the pseudo code:
> {code}
> for (ExecutionJobVertex ejv : getVerticesTopologically()) {
>     // step 1: try to allocate from the slot sharing group based on inputs, 
>     // one by one (allocating resources based on location preferences only).
>     // step 2: allocate slots for the remaining executions.
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7173) Fix the illustration of tumbling window.

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085609#comment-16085609
 ] 

ASF GitHub Bot commented on FLINK-7173:
---

GitHub user sunjincheng121 opened a pull request:

https://github.com/apache/flink/pull/4322

[FLINK-7173][doc]Change the illustration of tumbling window.

Change the illustration of tumbling window.
- [ ] General
  - The pull request references the related JIRA issue 
("[FLINK-7173][doc]Change the illustration of tumbling window.")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [x] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sunjincheng121/flink FLINK-7173-PR

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4322.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4322


commit fc1f954389010d9f7192f989c20d8b8b87c80244
Author: sunjincheng121 
Date:   2017-07-13T12:04:00Z

[FLINK-7173][doc]Change the illustration of tumbling window.




> Fix the illustration of tumbling window.
> 
>
> Key: FLINK-7173
> URL: https://issues.apache.org/jira/browse/FLINK-7173
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Documentation
>Reporter: sunjincheng
>Assignee: sunjincheng
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> !screenshot-1.png!
> Change it to:
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4322: [FLINK-7173][doc]Change the illustration of tumbli...

2017-07-13 Thread sunjincheng121
GitHub user sunjincheng121 opened a pull request:

https://github.com/apache/flink/pull/4322

[FLINK-7173][doc]Change the illustration of tumbling window.

Change the illustration of tumbling window.
- [ ] General
  - The pull request references the related JIRA issue 
("[FLINK-7173][doc]Change the illustration of tumbling window.")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [x] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sunjincheng121/flink FLINK-7173-PR

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4322.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4322


commit fc1f954389010d9f7192f989c20d8b8b87c80244
Author: sunjincheng121 
Date:   2017-07-13T12:04:00Z

[FLINK-7173][doc]Change the illustration of tumbling window.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7174) Bump dependency of Kafka 0.10.x to the latest one

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085608#comment-16085608
 ] 

ASF GitHub Bot commented on FLINK-7174:
---

GitHub user pnowojski opened a pull request:

https://github.com/apache/flink/pull/4321

[FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

This patch also fixes an incompatibility with the latest Kafka 0.10.x and 
0.11.x kafka-clients.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pnowojski/flink kafka010

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4321.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4321


commit e8aac4d3842c433ffc40e36c696950057e5139b9
Author: Piotr Nowojski 
Date:   2017-07-13T11:58:29Z

[FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1




> Bump dependency of Kafka 0.10.x to the latest one
> -
>
> Key: FLINK-7174
> URL: https://issues.apache.org/jira/browse/FLINK-7174
> Project: Flink
>  Issue Type: Improvement
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> We are using a pretty old Kafka version for the 0.10 connector. Besides the 
> bug fixes and improvements that were made between 0.10.0.1 and 0.10.2.1, the 
> 0.10.2.1 version is more similar to 0.11.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2....

2017-07-13 Thread pnowojski
GitHub user pnowojski opened a pull request:

https://github.com/apache/flink/pull/4321

[FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

This patch also fixes an incompatibility with the latest Kafka 0.10.x and 
0.11.x kafka-clients.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pnowojski/flink kafka010

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4321.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4321


commit e8aac4d3842c433ffc40e36c696950057e5139b9
Author: Piotr Nowojski 
Date:   2017-07-13T11:58:29Z

[FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

