[GitHub] [flink] flinkbot edited a comment on issue #9689: [FLINK-7151] add a basic function ddl

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9689: [FLINK-7151] add a basic function ddl
URL: https://github.com/apache/flink/pull/9689#issuecomment-531747685
 
 
   
   ## CI report:
   
   * bd2624914db1147588ea838ae542333c310290cc : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/127790175)
   * b5523d10152123f45cf883e446872b90532879c3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128113059)
   * 7c4c25b26aa9549fe83628315b816f16327000b1 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129622336)
   * 9e6b35cb0586839b893211b555192101ffce1a95 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129975363)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9762: !IGNORE! ZK 3.5.5 shaded migration

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9762: !IGNORE! ZK 3.5.5 shaded migration
URL: https://github.com/apache/flink/pull/9762#issuecomment-534689957
 
 
   
   ## CI report:
   
   * cefac57542dbd85a6693f6a1b7c8c90d91e85d30 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128980426)
   * f04ce4e35191731da657f170c6949ba70d87b6e3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129575680)
   * 0250869c17414f0c0047b34fdf6a6ad4b57bf017 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129576444)
   * d2d2ea63507f9e4f798ac96169e4ca11921e4954 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129579078)
   * dac3959dc4798b80edf9636b727dbeeb222c0a1f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129797705)
   * 90c244309caca6e54865705425aa9b799d479f40 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129964935)
   * aa6f893280b6a46f6a5ace64e1b1362e64b889b5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129973839)
   




[jira] [Created] (FLINK-14310) Get ExecutionVertexID from ExecutionVertex rather than creating new instances

2019-10-01 Thread Zhu Zhu (Jira)
Zhu Zhu created FLINK-14310:
---

 Summary: Get ExecutionVertexID from ExecutionVertex rather than creating new instances
 Key: FLINK-14310
 URL: https://issues.apache.org/jira/browse/FLINK-14310
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.10.0
Reporter: Zhu Zhu
 Fix For: 1.10.0


{{ExecutionVertexID}} is now added as a field to {{ExecutionVertex}}.

Many components, however, still create new {{ExecutionVertexID}} instances from an {{ExecutionVertex}} themselves. We should change them to use the field {{ExecutionVertex#executionVertexID}} directly.
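The refactoring above can be sketched with a simplified, hypothetical mirror of the two classes (illustrative names only, not Flink's actual implementation): the ID becomes a field created once in the constructor and handed out on every access, instead of being reassembled by each caller.

```java
import java.util.Objects;

// Hypothetical, simplified mirror of ExecutionVertexID/ExecutionVertex to
// illustrate the refactoring: callers read the cached ID field instead of
// constructing a fresh ExecutionVertexID from (jobVertexId, subtaskIndex).
final class ExecutionVertexID {
    private final String jobVertexId;
    private final int subtaskIndex;

    ExecutionVertexID(String jobVertexId, int subtaskIndex) {
        this.jobVertexId = jobVertexId;
        this.subtaskIndex = subtaskIndex;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ExecutionVertexID)) {
            return false;
        }
        ExecutionVertexID that = (ExecutionVertexID) o;
        return subtaskIndex == that.subtaskIndex && jobVertexId.equals(that.jobVertexId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(jobVertexId, subtaskIndex);
    }
}

final class ExecutionVertex {
    // Created once in the constructor and reused on every access.
    private final ExecutionVertexID executionVertexId;

    ExecutionVertex(String jobVertexId, int subtaskIndex) {
        this.executionVertexId = new ExecutionVertexID(jobVertexId, subtaskIndex);
    }

    ExecutionVertexID getID() {
        return executionVertexId;
    }
}

public class ExecutionVertexIdDemo {
    public static void main(String[] args) {
        ExecutionVertex vertex = new ExecutionVertex("map-operator", 0);

        // Before: each component rebuilt the ID itself (an extra allocation per call).
        ExecutionVertexID rebuilt = new ExecutionVertexID("map-operator", 0);

        // After: read the field directly; every access returns the same instance.
        ExecutionVertexID cached = vertex.getID();

        if (!cached.equals(rebuilt) || vertex.getID() != cached) {
            throw new AssertionError("field access should return one equal, shared instance");
        }
        System.out.println("same value, single instance");
    }
}
```

The payoff is avoiding a short-lived allocation per lookup on hot scheduler paths and keeping one canonical identity object per vertex.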



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9689: [FLINK-7151] add a basic function ddl

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9689: [FLINK-7151] add a basic function ddl
URL: https://github.com/apache/flink/pull/9689#issuecomment-531747685
 
 
   
   ## CI report:
   
   * bd2624914db1147588ea838ae542333c310290cc : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/127790175)
   * b5523d10152123f45cf883e446872b90532879c3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128113059)
   * 7c4c25b26aa9549fe83628315b816f16327000b1 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129622336)
   * 9e6b35cb0586839b893211b555192101ffce1a95 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/129975363)
   




[jira] [Created] (FLINK-14309) Kafka09ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink fails on Travis

2019-10-01 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-14309:
-

 Summary: Kafka09ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink fails on Travis
 Key: FLINK-14309
 URL: https://issues.apache.org/jira/browse/FLINK-14309
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.10.0
Reporter: Till Rohrmann
 Fix For: 1.10.0


The {{Kafka09ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink}} fails on Travis with

{code}
Test testOneToOneAtLeastOnceRegularSink(org.apache.flink.streaming.connectors.kafka.Kafka09ProducerITCase) failed with:
java.lang.AssertionError: Job should fail!
	at org.junit.Assert.fail(Assert.java:88)
	at org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnce(KafkaProducerTestBase.java:280)
	at org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink(KafkaProducerTestBase.java:206)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}

https://api.travis-ci.com/v3/job/240747188/log.txt
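The assertion message "Job should fail!" suggests the test injects a fault and treats normal job completion as a test failure; here the job apparently finished without surfacing the injected error. A minimal sketch of that "expect the job to fail" pattern (names are illustrative, not Flink's actual test code):

```java
import java.util.concurrent.Callable;

// Hypothetical sketch of the expect-failure pattern implied by the assertion
// above: run the pipeline, and treat clean completion as a test failure.
public class ExpectFailurePattern {
    static void expectFailure(Callable<Void> job) {
        try {
            job.call();
        } catch (Exception expected) {
            return; // the injected fault surfaced, as the test requires
        }
        throw new AssertionError("Job should fail!");
    }

    public static void main(String[] args) {
        // A job whose injected fault fires: the check passes silently.
        expectFailure(() -> {
            throw new RuntimeException("injected fault");
        });
        System.out.println("injected fault was observed");
    }
}
```

A job that returns normally would trip the `AssertionError` instead, which is the shape of the failure reported here.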





[jira] [Commented] (FLINK-14308) Kafka09ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator fails on Travis

2019-10-01 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16942514#comment-16942514
 ] 

Till Rohrmann commented on FLINK-14308:
---

Might be related.

> Kafka09ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator fails on Travis
> -
>
> Key: FLINK-14308
> URL: https://issues.apache.org/jira/browse/FLINK-14308
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The {{Kafka09ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator}} fails on Travis with
> {code}
> Test testOneToOneAtLeastOnceCustomOperator(org.apache.flink.streaming.connectors.kafka.Kafka09ProducerITCase) failed with:
> java.lang.AssertionError: Expected to contain all of: <[0]>, but was: <[]>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.apache.flink.streaming.connectors.kafka.KafkaTestBase.assertAtLeastOnceForTopic(KafkaTestBase.java:235)
>   at org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnce(KafkaProducerTestBase.java:289)
>   at org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator(KafkaProducerTestBase.java:214)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
> https://api.travis-ci.com/v3/job/240747188/log.txt





[jira] [Created] (FLINK-14308) Kafka09ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator fails on Travis

2019-10-01 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-14308:
-

 Summary: Kafka09ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator fails on Travis
 Key: FLINK-14308
 URL: https://issues.apache.org/jira/browse/FLINK-14308
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.10.0
Reporter: Till Rohrmann
 Fix For: 1.10.0


The {{Kafka09ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator}} fails on Travis with

{code}
Test testOneToOneAtLeastOnceCustomOperator(org.apache.flink.streaming.connectors.kafka.Kafka09ProducerITCase) failed with:
java.lang.AssertionError: Expected to contain all of: <[0]>, but was: <[]>
	at org.junit.Assert.fail(Assert.java:88)
	at org.apache.flink.streaming.connectors.kafka.KafkaTestBase.assertAtLeastOnceForTopic(KafkaTestBase.java:235)
	at org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnce(KafkaProducerTestBase.java:289)
	at org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator(KafkaProducerTestBase.java:214)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}

https://api.travis-ci.com/v3/job/240747188/log.txt
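The failed assertion ("Expected to contain all of: <[0]>, but was: <[]>") comes from an at-least-once check: every expected element must appear among the consumed records at least once, with duplicates allowed. A hypothetical sketch of what such a helper does (illustrative names; the real helper is Flink's `KafkaTestBase.assertAtLeastOnceForTopic`):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of an at-least-once assertion: every expected element
// must be seen at least once in the consumed records; duplicates are fine
// (at-least-once delivery permits them, unlike exactly-once).
public class AtLeastOnceCheck {
    static void assertAtLeastOnce(Collection<Integer> expected, Collection<Integer> consumed) {
        Set<Integer> seen = new HashSet<>(consumed);
        List<Integer> missing = new ArrayList<>();
        for (int e : expected) {
            if (!seen.contains(e)) {
                missing.add(e);
            }
        }
        if (!missing.isEmpty()) {
            throw new AssertionError(
                "Expected to contain all of: " + expected + ", but was: " + consumed);
        }
    }

    public static void main(String[] args) {
        // Duplicates do not violate at-least-once semantics.
        assertAtLeastOnce(Arrays.asList(0, 1), Arrays.asList(0, 0, 1, 2));

        // An empty consumed set reproduces the shape of the failure in the log.
        try {
            assertAtLeastOnce(Arrays.asList(0), Collections.emptyList());
            throw new IllegalStateException("check should have failed");
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```

In the reported run nothing was read back from the topic, so even element `0` was missing, which is what the `<[]>` in the message indicates.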





[jira] [Assigned] (FLINK-14224) Kafka010ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator fails on Travis

2019-10-01 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-14224:
-

Assignee: Alex

> Kafka010ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator fails on Travis
> --
>
> Key: FLINK-14224
> URL: https://issues.apache.org/jira/browse/FLINK-14224
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Assignee: Alex
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The {{Kafka010ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator}} fails on Travis with
> {code}
> Test testOneToOneAtLeastOnceCustomOperator(org.apache.flink.streaming.connectors.kafka.Kafka010ProducerITCase) failed with:
> java.lang.AssertionError: Job should fail!
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnce(KafkaProducerTestBase.java:280)
>   at org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator(KafkaProducerTestBase.java:214)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
> https://api.travis-ci.com/v3/job/238920411/log.txt





[jira] [Commented] (FLINK-14224) Kafka010ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator fails on Travis

2019-10-01 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16942512#comment-16942512
 ] 

Till Rohrmann commented on FLINK-14224:
---

That would be great [~1u0]. I've assigned you to this issue. Thanks a lot!

> Kafka010ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator fails on Travis
> --
>
> Key: FLINK-14224
> URL: https://issues.apache.org/jira/browse/FLINK-14224
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The {{Kafka010ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator}} fails on Travis with
> {code}
> Test testOneToOneAtLeastOnceCustomOperator(org.apache.flink.streaming.connectors.kafka.Kafka010ProducerITCase) failed with:
> java.lang.AssertionError: Job should fail!
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnce(KafkaProducerTestBase.java:280)
>   at org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator(KafkaProducerTestBase.java:214)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
> https://api.travis-ci.com/v3/job/238920411/log.txt





[GitHub] [flink] flinkbot edited a comment on issue #9689: [FLINK-7151] add a basic function ddl

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9689: [FLINK-7151] add a basic function ddl
URL: https://github.com/apache/flink/pull/9689#issuecomment-531747685
 
 
   
   ## CI report:
   
   * bd2624914db1147588ea838ae542333c310290cc : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/127790175)
   * b5523d10152123f45cf883e446872b90532879c3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128113059)
   * 7c4c25b26aa9549fe83628315b816f16327000b1 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129622336)
   * 9e6b35cb0586839b893211b555192101ffce1a95 : UNKNOWN
   




[GitHub] [flink] flinkbot edited a comment on issue #9762: !IGNORE! ZK 3.5.5 shaded migration

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9762: !IGNORE! ZK 3.5.5 shaded migration
URL: https://github.com/apache/flink/pull/9762#issuecomment-534689957
 
 
   
   ## CI report:
   
   * cefac57542dbd85a6693f6a1b7c8c90d91e85d30 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128980426)
   * f04ce4e35191731da657f170c6949ba70d87b6e3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129575680)
   * 0250869c17414f0c0047b34fdf6a6ad4b57bf017 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129576444)
   * d2d2ea63507f9e4f798ac96169e4ca11921e4954 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129579078)
   * dac3959dc4798b80edf9636b727dbeeb222c0a1f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129797705)
   * 90c244309caca6e54865705425aa9b799d479f40 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129964935)
   * aa6f893280b6a46f6a5ace64e1b1362e64b889b5 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/129973839)
   




[GitHub] [flink] flinkbot edited a comment on issue #9762: !IGNORE! ZK 3.5.5 shaded migration

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9762: !IGNORE! ZK 3.5.5 shaded migration
URL: https://github.com/apache/flink/pull/9762#issuecomment-534689957
 
 
   
   ## CI report:
   
   * cefac57542dbd85a6693f6a1b7c8c90d91e85d30 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128980426)
   * f04ce4e35191731da657f170c6949ba70d87b6e3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129575680)
   * 0250869c17414f0c0047b34fdf6a6ad4b57bf017 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129576444)
   * d2d2ea63507f9e4f798ac96169e4ca11921e4954 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129579078)
   * dac3959dc4798b80edf9636b727dbeeb222c0a1f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129797705)
   * 90c244309caca6e54865705425aa9b799d479f40 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129964935)
   * aa6f893280b6a46f6a5ace64e1b1362e64b889b5 : UNKNOWN
   




[GitHub] [flink] flinkbot edited a comment on issue #9762: !IGNORE! ZK 3.5.5 shaded migration

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9762: !IGNORE! ZK 3.5.5 shaded migration
URL: https://github.com/apache/flink/pull/9762#issuecomment-534689957
 
 
   
   ## CI report:
   
   * cefac57542dbd85a6693f6a1b7c8c90d91e85d30 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128980426)
   * f04ce4e35191731da657f170c6949ba70d87b6e3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129575680)
   * 0250869c17414f0c0047b34fdf6a6ad4b57bf017 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129576444)
   * d2d2ea63507f9e4f798ac96169e4ca11921e4954 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129579078)
   * dac3959dc4798b80edf9636b727dbeeb222c0a1f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129797705)
   * 90c244309caca6e54865705425aa9b799d479f40 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129964935)
   




[GitHub] [flink] flinkbot edited a comment on issue #9774: [hotfix][tools] Update comments in verify_scala_suffixes.sh

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9774: [hotfix][tools] Update comments in verify_scala_suffixes.sh
URL: https://github.com/apache/flink/pull/9774#issuecomment-535311583
 
 
   
   ## CI report:
   
   * b863a8fcbbf97650009e0947b10f283e6954ace3 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129204536)
   




[GitHub] [flink] flinkbot edited a comment on issue #9762: !IGNORE! ZK 3.5.5 shaded migration

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9762: !IGNORE! ZK 3.5.5 shaded migration
URL: https://github.com/apache/flink/pull/9762#issuecomment-534689957
 
 
   
   ## CI report:
   
   * cefac57542dbd85a6693f6a1b7c8c90d91e85d30 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128980426)
   * f04ce4e35191731da657f170c6949ba70d87b6e3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129575680)
   * 0250869c17414f0c0047b34fdf6a6ad4b57bf017 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129576444)
   * d2d2ea63507f9e4f798ac96169e4ca11921e4954 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129579078)
   * dac3959dc4798b80edf9636b727dbeeb222c0a1f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129797705)
   * 90c244309caca6e54865705425aa9b799d479f40 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/129964935)
   




[GitHub] [flink] flinkbot edited a comment on issue #9781: [FLINK-14098] Support multiple statements for TableEnvironment.

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9781: [FLINK-14098] Support multiple statements for TableEnvironment.
URL: https://github.com/apache/flink/pull/9781#issuecomment-535813967
 
 
   
   ## CI report:
   
   * ed0e04d1cd481b096d68d0c9bf36a997dcca2550 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129400381)
   * ec8fccdeb53c5e2e0f089708753be33839706e2a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129441309)
   * ad27f8e71d91534867d093f00cde3d5b81534282 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129526289)
   * 5247e6b03266d0d25d42a185e0a447643116d96f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129545862)
   * bdad4f2c28154dd4d9d4ab05b9202c47644d4c9f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129573368)
   * 0b54672d7eb381ac2cc422dd2da14518fa4d70d2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129666089)
   




[GitHub] [flink] flinkbot edited a comment on issue #9762: !IGNORE! ZK 3.5.5 shaded migration

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9762: !IGNORE! ZK 3.5.5 shaded migration
URL: https://github.com/apache/flink/pull/9762#issuecomment-534689957
 
 
   
   ## CI report:
   
   * cefac57542dbd85a6693f6a1b7c8c90d91e85d30 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/128980426)
   * f04ce4e35191731da657f170c6949ba70d87b6e3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129575680)
   * 0250869c17414f0c0047b34fdf6a6ad4b57bf017 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129576444)
   * d2d2ea63507f9e4f798ac96169e4ca11921e4954 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129579078)
   * dac3959dc4798b80edf9636b727dbeeb222c0a1f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129797705)
   * 90c244309caca6e54865705425aa9b799d479f40 : UNKNOWN
   




[GitHub] [flink] flinkbot edited a comment on issue #9774: [hotfix][tools] Update comments in verify_scala_suffixes.sh

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9774: [hotfix][tools] Update comments in 
verify_scala_suffixes.sh
URL: https://github.com/apache/flink/pull/9774#issuecomment-535311583
 
 
   
   ## CI report:
   
   * b863a8fcbbf97650009e0947b10f283e6954ace3 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129204536)
   




[jira] [Comment Edited] (FLINK-14123) Change taskmanager.memory.fraction default value to 0.6

2019-10-01 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16942428#comment-16942428
 ] 

Xintong Song edited comment on FLINK-14123 at 10/2/19 2:00 AM:
---

[~StephanEwen], thanks for pulling me in.

Actually, there is a discussion in this 
[PR|https://github.com/apache/flink/pull/9760] about whether we should stay 
backwards compatible on 'taskmanager.memory.fraction'. The problem is that 
the definition of the managed memory fraction changes:
 * In the current codebase, 'taskmanager.memory.fraction' = managed memory / (total 
flink memory - network memory).
 * According to FLIP-49, the new config option 
'taskmanager.memory.managed.fraction' = managed memory / total flink memory.

If we stay backwards compatible on the key 'taskmanager.memory.fraction', 
it could confuse users that the same configured fraction leads to a 
different managed memory size. Not supporting the legacy fraction may be a 
good way to draw users' attention to the definition change.

If we decide not to support the legacy fraction, there is no need to change 
the default value. If we decide to stay backwards compatible, it makes more 
sense to decrease the default value because of the definition change. In 
fact, the default value of the new fraction according to FLIP-49 is 0.5.
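The divergence between the two fraction definitions can be checked with a quick calculation. The sizes below are made-up illustrative numbers, not Flink defaults:

```python
# Hypothetical memory sizes (MB) to illustrate the two definitions above.
total_flink_memory = 1024
network_memory = 128

# Old definition: fraction of (total flink memory - network memory).
old_fraction = 0.7
managed_old = old_fraction * (total_flink_memory - network_memory)

# New FLIP-49 definition: fraction of total flink memory.
new_fraction = 0.7
managed_new = new_fraction * total_flink_memory

print(managed_old)  # 627.2
print(managed_new)  # 716.8 -- same fraction, larger managed memory
```

This is why carrying the legacy key forward unchanged would silently grow the managed memory size for the same configured value.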


was (Author: xintongsong):
[~StephanEwen], thanks for pulling me in.

Actually, there is a discussion regarding whether we should be backwards 
compatible on 'taskmanager.memory.fraction', in this 
[PR|[https://github.com/apache/flink/pull/9760]]. The problem is that, the 
definition of managed memory fraction becomes different.
 * In current codebase, 'taskmanager.memory.fraction' = managed memory / (total 
flink memory - network memory).
 * According to FLIP-49, the new config option 
'taskmanager.memory.managed.fraction' = managed memory / total flink memory.

If we support backwards compatibility on key 'taskmanager.memory.fraction', 
then it could be confusing for the user that the same configured fraction 
actually leads to different managed memory size. Maybe not supporting the 
legacy fraction could be a good way to draw attention of users on the 
definition change.

If we decide not supporting the legacy fraction, then there would be no 
necessary on changing the default value. If we decide to be backwards 
compatible on this, then it makes more sense to decrease the default value 
because of the definition changes. Actually, the default value of the new 
fraction according to FLIP-49 is 0.5.

> Change taskmanager.memory.fraction default value to 0.6
> ---
>
> Key: FLINK-14123
> URL: https://issues.apache.org/jira/browse/FLINK-14123
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Affects Versions: 1.9.0
>Reporter: liupengcheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, we are testing flink batch task, such as terasort, however, it 
> started only awhile then it failed due to OOM. 
>  
> {code:java}
> org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: a807e1d635bd4471ceea4282477f8850)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:262)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:326)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
>   at 
> org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:539)
>   at 
> com.github.ehiggs.spark.terasort.FlinkTeraSort$.main(FlinkTeraSort.scala:89)
>   at 
> com.github.ehiggs.spark.terasort.FlinkTeraSort.main(FlinkTeraSort.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:604)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:466)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   

[GitHub] [flink] flinkbot edited a comment on issue #9781: [FLINK-14098] Support multiple statements for TableEnvironment.

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9781: [FLINK-14098] Support multiple 
statements for TableEnvironment.
URL: https://github.com/apache/flink/pull/9781#issuecomment-535813967
 
 
   
   ## CI report:
   
   * ed0e04d1cd481b096d68d0c9bf36a997dcca2550 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129400381)
   * ec8fccdeb53c5e2e0f089708753be33839706e2a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129441309)
   * ad27f8e71d91534867d093f00cde3d5b81534282 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129526289)
   * 5247e6b03266d0d25d42a185e0a447643116d96f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129545862)
   * bdad4f2c28154dd4d9d4ab05b9202c47644d4c9f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129573368)
   * 0b54672d7eb381ac2cc422dd2da14518fa4d70d2 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129666089)
   




[jira] [Comment Edited] (FLINK-14123) Change taskmanager.memory.fraction default value to 0.6

2019-10-01 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16942428#comment-16942428
 ] 

Xintong Song edited comment on FLINK-14123 at 10/2/19 1:59 AM:
---

[~StephanEwen], thanks for pulling me in.

Actually, there is a discussion in this 
[PR|https://github.com/apache/flink/pull/9760] about whether we should stay 
backwards compatible on 'taskmanager.memory.fraction'. The problem is that 
the definition of the managed memory fraction changes:
 * In the current codebase, 'taskmanager.memory.fraction' = managed memory / (total 
flink memory - network memory).
 * According to FLIP-49, the new config option 
'taskmanager.memory.managed.fraction' = managed memory / total flink memory.

If we stay backwards compatible on the key 'taskmanager.memory.fraction', 
it could confuse users that the same configured fraction leads to a 
different managed memory size. Not supporting the legacy fraction may be a 
good way to draw users' attention to the definition change.

If we decide not to support the legacy fraction, there is no need to change 
the default value. If we decide to stay backwards compatible, it makes more 
sense to decrease the default value because of the definition change. In 
fact, the default value of the new fraction according to FLIP-49 is 0.5.


was (Author: xintongsong):
[~StephanEwen],

I'm not sure about this.

 

Actually, there is a discussion regarding whether we should be backwards 
compatible on 'taskmanager.memory.fraction', in this 
[PR|[https://github.com/apache/flink/pull/9760]]. The problem is that, the 
definition of managed memory fraction becomes different.
 * In current codebase, 'taskmanager.memory.fraction' = managed memory / (total 
flink memory - network memory).
 * According to FLIP-49, the new config option 
'taskmanager.memory.managed.fraction' = managed memory / total flink memory.

If we support backwards compatibility on key 'taskmanager.memory.fraction', 
then it could be confusing for the user that the same configured fraction 
actually leads to different managed memory size. Maybe not supporting the 
legacy fraction could be a good way to draw attention of users on the 
definition change.

 

If we decide not supporting the legacy fraction, then there would be no 
necessary on changing the default value. If we decide to be backwards 
compatible on this, then it makes more sense to decrease the default value 
because of the definition changes. Actually, the default value of the new 
fraction according to FLIP-49 is 0.5.

> Change taskmanager.memory.fraction default value to 0.6
> ---
>
> Key: FLINK-14123
> URL: https://issues.apache.org/jira/browse/FLINK-14123
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Affects Versions: 1.9.0
>Reporter: liupengcheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, we are testing flink batch task, such as terasort, however, it 
> started only awhile then it failed due to OOM. 
>  
> {code:java}
> org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: a807e1d635bd4471ceea4282477f8850)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:262)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:326)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
>   at 
> org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:539)
>   at 
> com.github.ehiggs.spark.terasort.FlinkTeraSort$.main(FlinkTeraSort.scala:89)
>   at 
> com.github.ehiggs.spark.terasort.FlinkTeraSort.main(FlinkTeraSort.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:604)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:466)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:2

[GitHub] [flink] TisonKun commented on issue #9774: [hotfix][tools] Update comments in verify_scala_suffixes.sh

2019-10-01 Thread GitBox
TisonKun commented on issue #9774: [hotfix][tools] Update comments in 
verify_scala_suffixes.sh
URL: https://github.com/apache/flink/pull/9774#issuecomment-537302703
 
 
   @flinkbot run travis




[jira] [Commented] (FLINK-14123) Change taskmanager.memory.fraction default value to 0.6

2019-10-01 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16942428#comment-16942428
 ] 

Xintong Song commented on FLINK-14123:
--

[~StephanEwen],

I'm not sure about this.

Actually, there is a discussion in this 
[PR|https://github.com/apache/flink/pull/9760] about whether we should stay 
backwards compatible on 'taskmanager.memory.fraction'. The problem is that 
the definition of the managed memory fraction changes:
 * In the current codebase, 'taskmanager.memory.fraction' = managed memory / (total 
flink memory - network memory).
 * According to FLIP-49, the new config option 
'taskmanager.memory.managed.fraction' = managed memory / total flink memory.

If we stay backwards compatible on the key 'taskmanager.memory.fraction', 
it could confuse users that the same configured fraction leads to a 
different managed memory size. Not supporting the legacy fraction may be a 
good way to draw users' attention to the definition change.

If we decide not to support the legacy fraction, there is no need to change 
the default value. If we decide to stay backwards compatible, it makes more 
sense to decrease the default value because of the definition change. In 
fact, the default value of the new fraction according to FLIP-49 is 0.5.

> Change taskmanager.memory.fraction default value to 0.6
> ---
>
> Key: FLINK-14123
> URL: https://issues.apache.org/jira/browse/FLINK-14123
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Affects Versions: 1.9.0
>Reporter: liupengcheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, we are testing flink batch task, such as terasort, however, it 
> started only awhile then it failed due to OOM. 
>  
> {code:java}
> org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: a807e1d635bd4471ceea4282477f8850)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:262)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:326)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
>   at 
> org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:539)
>   at 
> com.github.ehiggs.spark.terasort.FlinkTeraSort$.main(FlinkTeraSort.scala:89)
>   at 
> com.github.ehiggs.spark.terasort.FlinkTeraSort.main(FlinkTeraSort.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:604)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:466)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1007)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1080)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1080)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:259)
>   ... 23 more
> Caused by: java.lang.RuntimeException: Error obtaining the sorted input: 
> Thread 'SortMerger Reading Thread' terminated due to an exception: GC 
> overhead limit exceeded
>   at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger.getIterator(Un

[GitHub] [flink] zhengcanbin commented on issue #9781: [FLINK-14098] Support multiple statements for TableEnvironment.

2019-10-01 Thread GitBox
zhengcanbin commented on issue #9781: [FLINK-14098] Support multiple statements 
for TableEnvironment.
URL: https://github.com/apache/flink/pull/9781#issuecomment-537300882
 
 
   @flinkbot run travis




[GitHub] [flink] sjwiesman commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL

2019-10-01 Thread GitBox
sjwiesman commented on a change in pull request #9799: 
[FLINK-13360][documentation] Add documentation for HBase connector for Table 
API & SQL
URL: https://github.com/apache/flink/pull/9799#discussion_r330262955
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -1075,6 +1075,72 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### HBase Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The HBase connector allows for reading from an HBase cluster.
+The HBase connector allows for writing into an HBase cluster.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging 
UPSERT/DELETE messages with the external system using a [key defined by the 
query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append 
mode](#update-modes) for exchanging only INSERT messages with the external 
system.
+
+To use this connector, add the following dependency to your project:
+
{% highlight xml %}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-hbase{{ site.scala_version_suffix }}</artifactId>
  <version>{{site.version }}</version>
</dependency>
{% endhighlight %}
+
+The connector can be defined as follows:
+
+
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  hbase_rowkey_name rowkey_type,
+  hbase_column_family_name1 ROW<...>,
+  hbase_column_family_name2 ROW<...>
+) WITH (
+  'connector.type' = 'hbase', -- required: specify this table type is hbase
 
 Review comment:
   Does HBase not support java/scala, python or yaml? 




[GitHub] [flink] sjwiesman commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL

2019-10-01 Thread GitBox
sjwiesman commented on a change in pull request #9799: 
[FLINK-13360][documentation] Add documentation for HBase connector for Table 
API & SQL
URL: https://github.com/apache/flink/pull/9799#discussion_r330264226
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -1075,6 +1075,72 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### HBase Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The HBase connector allows for reading from an HBase cluster.
+The HBase connector allows for writing into an HBase cluster.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging 
UPSERT/DELETE messages with the external system using a [key defined by the 
query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append 
mode](#update-modes) for exchanging only INSERT messages with the external 
system.
+
+To use this connector, add the following dependency to your project:
+
{% highlight xml %}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-hbase{{ site.scala_version_suffix }}</artifactId>
  <version>{{site.version }}</version>
</dependency>
{% endhighlight %}
+
+The connector can be defined as follows:
+
+
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  hbase_rowkey_name rowkey_type,
+  hbase_column_family_name1 ROW<...>,
+  hbase_column_family_name2 ROW<...>
+) WITH (
+  'connector.type' = 'hbase', -- required: specify this table type is hbase
 
 Review comment:
   If so, we should open a follow up ticket to make this change everywhere. 





[GitHub] [flink] sjwiesman commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL

2019-10-01 Thread GitBox
sjwiesman commented on a change in pull request #9799: 
[FLINK-13360][documentation] Add documentation for HBase connector for Table 
API & SQL
URL: https://github.com/apache/flink/pull/9799#discussion_r330264226
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -1075,6 +1075,72 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### HBase Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The HBase connector allows for reading from an HBase cluster.
+The HBase connector allows for writing into an HBase cluster.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging 
UPSERT/DELETE messages with the external system using a [key defined by the 
query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append 
mode](#update-modes) for exchanging only INSERT messages with the external 
system.
+
+To use this connector, add the following dependency to your project:
+
{% highlight xml %}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-hbase{{ site.scala_version_suffix }}</artifactId>
  <version>{{site.version }}</version>
</dependency>
{% endhighlight %}
+
+The connector can be defined as follows:
+
+
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  hbase_rowkey_name rowkey_type,
+  hbase_column_family_name1 ROW<...>,
+  hbase_column_family_name2 ROW<...>
+) WITH (
+  'connector.type' = 'hbase', -- required: specify this table type is hbase
 
 Review comment:
   If so this is a change we would want to make everywhere. 




[GitHub] [flink] sjwiesman commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL

2019-10-01 Thread GitBox
sjwiesman commented on a change in pull request #9799: 
[FLINK-13360][documentation] Add documentation for HBase connector for Table 
API & SQL
URL: https://github.com/apache/flink/pull/9799#discussion_r330257639
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -1075,6 +1075,72 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### HBase Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The HBase connector allows for reading from an HBase cluster.
+The HBase connector allows for writing into an HBase cluster.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging 
UPSERT/DELETE messages with the external system using a [key defined by the 
query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append 
mode](#update-modes) for exchanging only INSERT messages with the external 
system.
 
 Review comment:
   ```suggestion
   For append-only queries, the connector can also operate in [append 
mode](#update-modes) by executing INSERT statements.
   ```




[GitHub] [flink] sjwiesman commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL

2019-10-01 Thread GitBox
sjwiesman commented on a change in pull request #9799: 
[FLINK-13360][documentation] Add documentation for HBase connector for Table 
API & SQL
URL: https://github.com/apache/flink/pull/9799#discussion_r330259007
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -1075,6 +1075,72 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### HBase Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The HBase connector allows for reading from an HBase cluster.
+The HBase connector allows for writing into an HBase cluster.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging 
UPSERT/DELETE messages with the external system using a [key defined by the 
query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append 
mode](#update-modes) for exchanging only INSERT messages with the external 
system.
+
+To use this connector, add the following dependency to your project:
+
{% highlight xml %}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-hbase{{ site.scala_version_suffix }}</artifactId>
  <version>{{site.version }}</version>
</dependency>
{% endhighlight %}
+
+The connector can be defined as follows:
+
+
{% highlight sql %}
CREATE TABLE MyUserTable (
  hbase_rowkey_name rowkey_type,
  hbase_column_family_name1 ROW<...>,
  hbase_column_family_name2 ROW<...>
) WITH (
  'connector.type' = 'hbase',                  -- required: specify this table type is hbase

  'connector.version' = '1.4.3',               -- required: valid connector versions are "1.4.3"

  'connector.table-name' = 'hbase_table_name', -- required: hbase table name

  'connector.zookeeper.quorum' = 'quorum_url', -- required: hbase zookeeper config
  'connector.zookeeper.znode.parent' = 'znode',

  'connector.write.buffer-flush.max-size' = '1048576', -- optional: write option, sets when to flush a buffered
                                                       -- request based on the memory size of rows currently added

  'connector.write.buffer-flush.max-rows' = '1',       -- optional: write option, sets when to flush a buffered
                                                       -- request based on the number of rows currently added

  'connector.write.buffer-flush.interval' = '1'        -- optional: write option, sets a flush interval, flushing
                                                       -- buffered requests if the interval passes, in milliseconds
)
{% endhighlight %}
+
+
+
+**Column family:** Values other than rowKey must be declared by column 
families. So need to wrap values with the SQL ROW function before inserting 
into hbase table.
 
 Review comment:
   ```suggestion
  **Column family:** Values other than `rowKey` must be declared as column families, and all column family values must be wrapped with the SQL ROW function before being inserted into an HBase table.
   ```




[GitHub] [flink] sjwiesman commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL

2019-10-01 Thread GitBox
sjwiesman commented on a change in pull request #9799: 
[FLINK-13360][documentation] Add documentation for HBase connector for Table 
API & SQL
URL: https://github.com/apache/flink/pull/9799#discussion_r330257845
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -1075,6 +1075,72 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### HBase Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The HBase connector allows for reading from an HBase cluster.
+The HBase connector allows for writing into an HBase cluster.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging 
UPSERT/DELETE messages with the external system using a [key defined by the 
query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append 
mode](#update-modes) for exchanging only INSERT messages with the external 
system.
+
+To use this connector, add the following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-hbase{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
 
 Review comment:
   nit: 
   ```suggestion
  <version>{{ site.version }}</version>
   ```




[GitHub] [flink] sjwiesman commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL

2019-10-01 Thread GitBox
sjwiesman commented on a change in pull request #9799: 
[FLINK-13360][documentation] Add documentation for HBase connector for Table 
API & SQL
URL: https://github.com/apache/flink/pull/9799#discussion_r330260775
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -1075,6 +1075,72 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### HBase Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The HBase connector allows for reading from an HBase cluster.
+The HBase connector allows for writing into an HBase cluster.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging 
UPSERT/DELETE messages with the external system using a [key defined by the 
query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append 
mode](#update-modes) for exchanging only INSERT messages with the external 
system.
+
+To use this connector, add the following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-hbase{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+The connector can be defined as follows:
+
+
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  hbase_rowkey_name rowkey_type,
+  hbase_column_family_name1 ROW<...>,
+  hbase_column_family_name2 ROW<...>
+) WITH (
+  'connector.type' = 'hbase', -- required: specify this table type is hbase
+  
+  'connector.version' = '1.4.3',                -- required: valid connector versions are "1.4.3"
+  
+  'connector.table-name' = 'hbase_table_name',  -- required: hbase table name
+  
+  'connector.zookeeper.quorum' = 'quorum_url', -- required: hbase zookeeper config
+  'connector.zookeeper.znode.parent' = 'znode',
+
+  'connector.write.buffer-flush.max-size' = '1048576', -- optional: Write option, sets when to flush a buffered request
+                                                       -- based on the memory size of rows currently added.
+
+  'connector.write.buffer-flush.max-rows' = '1', -- optional: Write option, sets when to flush buffered
+                                                 -- request based on the number of rows currently added.
+
+  'connector.write.buffer-flush.interval' = '1', -- optional: Write option, sets a flush interval flushing buffered
+                                                 -- requesting if the interval passes, in milliseconds.
+)
+{% endhighlight %}
+
+
+
+**Column family:** Values other than rowKey must be declared by column 
families. So need to wrap values with the SQL ROW function before inserting 
into hbase table.
+
+**HBase config:** If need to configure Config for HBase, create default 
configuration from current runtime env (`hbase-site.xml` in classpath) first, 
and overwrite configuration using serialized configuration from client-side env 
(`hbase-site.xml` in classpath).
+
+**Temporary join:** The Lookup Join of HBase does not use any caching, and 
every time the data is accessed directly to the client Api of HBase.
 
 Review comment:
   ```suggestion
  **Temporary join:** Lookup joins against HBase do not use any caching; data is always queried directly through the HBase client.
   ```




[GitHub] [flink] sjwiesman commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL

2019-10-01 Thread GitBox
sjwiesman commented on a change in pull request #9799: 
[FLINK-13360][documentation] Add documentation for HBase connector for Table 
API & SQL
URL: https://github.com/apache/flink/pull/9799#discussion_r330256171
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -1075,6 +1075,72 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### HBase Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The HBase connector allows for reading from an HBase cluster.
+The HBase connector allows for writing into an HBase cluster.
 
 Review comment:
   This can be one sentence; 
   
   The HBase connector allows for reading from and writing to an HBase cluster.




[GitHub] [flink] sjwiesman commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL

2019-10-01 Thread GitBox
sjwiesman commented on a change in pull request #9799: 
[FLINK-13360][documentation] Add documentation for HBase connector for Table 
API & SQL
URL: https://github.com/apache/flink/pull/9799#discussion_r330262955
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -1075,6 +1075,72 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### HBase Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The HBase connector allows for reading from an HBase cluster.
+The HBase connector allows for writing into an HBase cluster.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging 
UPSERT/DELETE messages with the external system using a [key defined by the 
query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append 
mode](#update-modes) for exchanging only INSERT messages with the external 
system.
+
+To use this connector, add the following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-hbase{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+The connector can be defined as follows:
+
+
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  hbase_rowkey_name rowkey_type,
+  hbase_column_family_name1 ROW<...>,
+  hbase_column_family_name2 ROW<...>
+) WITH (
+  'connector.type' = 'hbase', -- required: specify this table type is hbase
 
 Review comment:
   I think the DDL properties would be much clearer as a table and only show 
create table statements in examples. @twalthr and @dawidwys what do you think? 




[GitHub] [flink] sjwiesman commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL

2019-10-01 Thread GitBox
sjwiesman commented on a change in pull request #9799: 
[FLINK-13360][documentation] Add documentation for HBase connector for Table 
API & SQL
URL: https://github.com/apache/flink/pull/9799#discussion_r330262148
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -1075,6 +1075,72 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### HBase Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The HBase connector allows for reading from an HBase cluster.
+The HBase connector allows for writing into an HBase cluster.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging 
UPSERT/DELETE messages with the external system using a [key defined by the 
query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append 
mode](#update-modes) for exchanging only INSERT messages with the external 
system.
+
+To use this connector, add the following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-hbase{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+The connector can be defined as follows:
+
+
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  hbase_rowkey_name rowkey_type,
+  hbase_column_family_name1 ROW<...>,
+  hbase_column_family_name2 ROW<...>
+) WITH (
+  'connector.type' = 'hbase', -- required: specify this table type is hbase
+  
+  'connector.version' = '1.4.3',                -- required: valid connector versions are "1.4.3"
+  
+  'connector.table-name' = 'hbase_table_name',  -- required: hbase table name
+  
+  'connector.zookeeper.quorum' = 'quorum_url', -- required: hbase zookeeper config
+  'connector.zookeeper.znode.parent' = 'znode',
+
+  'connector.write.buffer-flush.max-size' = '1048576', -- optional: Write option, sets when to flush a buffered request
+                                                       -- based on the memory size of rows currently added.
+
+  'connector.write.buffer-flush.max-rows' = '1', -- optional: Write option, sets when to flush buffered
+                                                 -- request based on the number of rows currently added.
+
+  'connector.write.buffer-flush.interval' = '1', -- optional: Write option, sets a flush interval flushing buffered
+                                                 -- requesting if the interval passes, in milliseconds.
+)
+{% endhighlight %}
+
+
+
+**Column family:** Values other than rowKey must be declared by column 
families. So need to wrap values with the SQL ROW function before inserting 
into hbase table.
+
+**HBase config:** If need to configure Config for HBase, create default 
configuration from current runtime env (`hbase-site.xml` in classpath) first, 
and overwrite configuration using serialized configuration from client-side env 
(`hbase-site.xml` in classpath).
 
 Review comment:
   I found this sentence very confusing and I'm not entirely sure what you are 
trying to communicate. 




[GitHub] [flink] sjwiesman commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL

2019-10-01 Thread GitBox
sjwiesman commented on a change in pull request #9799: 
[FLINK-13360][documentation] Add documentation for HBase connector for Table 
API & SQL
URL: https://github.com/apache/flink/pull/9799#discussion_r330257331
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -1075,6 +1075,72 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### HBase Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The HBase connector allows for reading from an HBase cluster.
+The HBase connector allows for writing into an HBase cluster.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging 
UPSERT/DELETE messages with the external system using a [key defined by the 
query](./streaming/dynamic_tables.html#table-to-stream-conversion).
 
 Review comment:
   ```suggestion
   The connector can operate in [upsert mode](#update-modes) by executing 
UPSERT and DELETE statements on the cluster using a [key defined by the 
query](./streaming/dynamic_tables.html#table-to-stream-conversion).
   ```




[GitHub] [flink] sjwiesman commented on a change in pull request #9799: [FLINK-13360][documentation] Add documentation for HBase connector for Table API & SQL

2019-10-01 Thread GitBox
sjwiesman commented on a change in pull request #9799: 
[FLINK-13360][documentation] Add documentation for HBase connector for Table 
API & SQL
URL: https://github.com/apache/flink/pull/9799#discussion_r330260287
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -1075,6 +1075,72 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### HBase Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The HBase connector allows for reading from an HBase cluster.
+The HBase connector allows for writing into an HBase cluster.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging 
UPSERT/DELETE messages with the external system using a [key defined by the 
query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append 
mode](#update-modes) for exchanging only INSERT messages with the external 
system.
+
+To use this connector, add the following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-hbase{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+The connector can be defined as follows:
+
+
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  hbase_rowkey_name rowkey_type,
+  hbase_column_family_name1 ROW<...>,
+  hbase_column_family_name2 ROW<...>
+) WITH (
+  'connector.type' = 'hbase', -- required: specify this table type is hbase
+  
+  'connector.version' = '1.4.3',                -- required: valid connector versions are "1.4.3"
+  
+  'connector.table-name' = 'hbase_table_name',  -- required: hbase table name
+  
+  'connector.zookeeper.quorum' = 'quorum_url', -- required: hbase zookeeper config
+  'connector.zookeeper.znode.parent' = 'znode',
+
+  'connector.write.buffer-flush.max-size' = '1048576', -- optional: Write option, sets when to flush a buffered request
+                                                       -- based on the memory size of rows currently added.
+
+  'connector.write.buffer-flush.max-rows' = '1', -- optional: Write option, sets when to flush buffered
+                                                 -- request based on the number of rows currently added.
+
+  'connector.write.buffer-flush.interval' = '1', -- optional: Write option, sets a flush interval flushing buffered
+                                                 -- requesting if the interval passes, in milliseconds.
+)
+{% endhighlight %}
+
+
+
+**Column family:** Values other than rowKey must be declared by column 
families. So need to wrap values with the SQL ROW function before inserting 
into hbase table.
+
+**HBase config:** If need to configure Config for HBase, create default 
configuration from current runtime env (`hbase-site.xml` in classpath) first, 
and overwrite configuration using serialized configuration from client-side env 
(`hbase-site.xml` in classpath).
+
+**Temporary join:** The Lookup Join of HBase does not use any caching, and 
every time the data is accessed directly to the client Api of HBase.
+
+**Rowkey:** User should confirm rowkey should not be empty string. (waiting 
for support)
 
 Review comment:
   ```suggestion
   **Rowkey:** Empty string rowkey values are currently unsupported.
   ```




[jira] [Commented] (FLINK-10672) Task stuck while writing output to flink

2019-10-01 Thread Ankur Goenka (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16942291#comment-16942291
 ] 

Ankur Goenka commented on FLINK-10672:
--

I don't have the original graph as the code has changed.

But the problem still exists when running the following steps with parallelism 
12.

 

Set up Flink using

[https://github.com/tensorflow/tfx/blob/master/tfx/examples/chicago_taxi/setup_beam_on_flink.sh]

 

Run the pipeline
 # 
[https://github.com/tensorflow/tfx/blob/master/tfx/examples/chicago_taxi/tfdv_analyze_and_validate_portable_beam.sh]
 # 
[https://github.com/tensorflow/tfx/blob/master/tfx/examples/chicago_taxi/preprocess_portable_beam.sh]

 

You can view the graph by just running it with parallelism 1.

 

 

> Task stuck while writing output to flink
> 
>
> Key: FLINK-10672
> URL: https://issues.apache.org/jira/browse/FLINK-10672
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.5.4
> Environment: OS: Debuan rodente 4.17
> Flink version: 1.5.4
> ||Key||Value||
> |jobmanager.heap.mb|1024|
> |jobmanager.rpc.address|localhost|
> |jobmanager.rpc.port|6123|
> |metrics.reporter.jmx.class|org.apache.flink.metrics.jmx.JMXReporter|
> |metrics.reporter.jmx.port|9250-9260|
> |metrics.reporters|jmx|
> |parallelism.default|1|
> |rest.port|8081|
> |taskmanager.heap.mb|1024|
> |taskmanager.numberOfTaskSlots|1|
> |web.tmpdir|/tmp/flink-web-bdb73d6c-5b9e-47b5-9ebf-eed0a7c82c26|
>  
> h1. Overview
> ||Data Port||All Slots||Free Slots||CPU Cores||Physical Memory||JVM Heap 
> Size||Flink Managed Memory||
> |43501|1|0|12|62.9 GB|922 MB|642 MB|
> h1. Memory
> h2. JVM (Heap/Non-Heap)
> ||Type||Committed||Used||Maximum||
> |Heap|922 MB|575 MB|922 MB|
> |Non-Heap|68.8 MB|64.3 MB|-1 B|
> |Total|991 MB|639 MB|922 MB|
> h2. Outside JVM
> ||Type||Count||Used||Capacity||
> |Direct|3,292|105 MB|105 MB|
> |Mapped|0|0 B|0 B|
> h1. Network
> h2. Memory Segments
> ||Type||Count||
> |Available|3,194|
> |Total|3,278|
> h1. Garbage Collection
> ||Collector||Count||Time||
> |G1_Young_Generation|13|336|
> |G1_Old_Generation|1|21|
>Reporter: Ankur Goenka
>Assignee: Yun Gao
>Priority: Major
>  Labels: beam
> Attachments: 1uruvakHxBu.png, 3aDKQ24WvKk.png, Po89UGDn58V.png, 
> jmx_dump.json, jmx_dump_detailed.json, jstack_129827.log, jstack_163822.log, 
> jstack_66985.log
>
>
> I am running a fairly complex pipeline with 200+ tasks.
> The pipeline works fine with small data (order of 10kb input) but gets stuck
> with slightly larger data (300kb input).
>  
> The task gets stuck while writing the output to Flink; more specifically, it
> gets stuck while requesting a memory segment in the local buffer pool. The
> TaskManager UI shows that it has enough memory and memory segments to work with.
> The relevant stack trace is 
> {quote}"grpc-default-executor-0" #138 daemon prio=5 os_prio=0 
> tid=0x7fedb0163800 nid=0x30b7f in Object.wait() [0x7fedb4f9]
>  java.lang.Thread.State: TIMED_WAITING (on object monitor)
>  at (C/C++) 0x7fef201c7dae (Unknown Source)
>  at (C/C++) 0x7fef1f2aea07 (Unknown Source)
>  at (C/C++) 0x7fef1f241cd3 (Unknown Source)
>  at java.lang.Object.wait(Native Method)
>  - waiting on <0xf6d56450> (a java.util.ArrayDeque)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:247)
>  - locked <0xf6d56450> (a java.util.ArrayDeque)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:204)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:213)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:144)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:107)
>  at 
> org.apache.flink.runtime.operators.shipping.OutputCollector.collect(OutputCollector.java:65)
>  at 
> org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:42)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:26)
>  at 
> org.apache.flink.runtime.operators.chaining.ChainedFlatMapDriver.collect(ChainedFlatMapDriver.java:80)
>  at 
> org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction$MyDataReceiver.accept(FlinkExecutableStageFunct
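
The blocking call at the top of this trace (`LocalBufferPool.requestMemorySegment`) waits on a monitor until another thread recycles a segment. A minimal self-contained sketch of that pattern (illustrative only, not Flink's actual `LocalBufferPool`; the class and method names here are invented):

```java
import java.util.ArrayDeque;

// Hedged sketch of the blocking behavior seen in the stack trace above:
// a bounded segment pool where a request waits until a segment is recycled.
public class BoundedSegmentPool {
    private final ArrayDeque<byte[]> available = new ArrayDeque<>();

    public BoundedSegmentPool(int segments, int segmentSize) {
        for (int i = 0; i < segments; i++) {
            available.add(new byte[segmentSize]);
        }
    }

    // Blocks (Object.wait) while the pool is empty -- the state the
    // "grpc-default-executor-0" thread is stuck in above.
    public synchronized byte[] requestSegment() throws InterruptedException {
        while (available.isEmpty()) {
            wait();
        }
        return available.poll();
    }

    // Returning a segment wakes up any waiting requester.
    public synchronized void recycle(byte[] segment) {
        available.add(segment);
        notifyAll();
    }
}
```

If no downstream consumer ever recycles a segment, `requestSegment()` waits indefinitely, which matches the stuck state reported in this issue.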

[GitHub] [flink] YuvalItzchakov commented on issue #9780: [FLINK-14042] [flink-table-planner] Fix TemporalTable row schema always inferred as nullable

2019-10-01 Thread GitBox
YuvalItzchakov commented on issue #9780: [FLINK-14042] [flink-table-planner] 
Fix TemporalTable row schema always inferred as nullable
URL: https://github.com/apache/flink/pull/9780#issuecomment-537200121
 
 
   Anyone?




[GitHub] [flink] flinkbot edited a comment on issue #9832: [FLINK-11843] Bind lifespan of Dispatcher to leader session

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9832: [FLINK-11843] Bind lifespan of 
Dispatcher to leader session
URL: https://github.com/apache/flink/pull/9832#issuecomment-537049332
 
 
   
   ## CI report:
   
   * abaae048fef753455970fac9d6ab421b660b0536 : UNKNOWN
   * b96c63552ccd322adae7a41a410615e95b538ece : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867595)
   




[GitHub] [flink] flinkbot edited a comment on issue #9813: [FLINK-14287] Decouple leader address from LeaderContender

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9813: [FLINK-14287] Decouple leader address 
from LeaderContender 
URL: https://github.com/apache/flink/pull/9813#issuecomment-536324105
 
 
   
   ## CI report:
   
   * b7a7e7b6dc8a16d23097911aa98852e2cf4d9c00 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129601641)
   * 3b8410b82ec366014551e35fb8791c6ad623faf9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129707833)
   * 96ef0ac36eb37792fd7837619cbca7d8a1272090 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129837942)
   * 4c3d9db5f51b815c810be6fe6ae82c62452f14b4 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/129854608)
   * 258622b3043245884ed9d567124cef31ffc0571a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867500)
   




[GitHub] [flink] flinkbot edited a comment on issue #9812: [FLINK-14286] Remove Akka specific parsing from LeaderConnectionInfo

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9812: [FLINK-14286] Remove Akka specific 
parsing from LeaderConnectionInfo
URL: https://github.com/apache/flink/pull/9812#issuecomment-536322371
 
 
   
   ## CI report:
   
   * c1ca1131b02a2c555cd8386fd07aa1cfccbab161 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129600831)
   * da85c13656e6cf02fcadbadf50df0c201f07970c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129707797)
   * 2abee623fcedcbf8c61ce455cbf59bd59b75dcf4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129837903)
   * c1efb87828fc98fa2f7ff5b07b082b9dfc67c501 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129854578)
   * 9b2e7712bc24fca3f77aa02993aa3afb89de7bfc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867450)
   




[GitHub] [flink] flinkbot edited a comment on issue #9807: [FLINK-14280] Introduce DispatcherRunner

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9807: [FLINK-14280] Introduce 
DispatcherRunner 
URL: https://github.com/apache/flink/pull/9807#issuecomment-536314902
 
 
   
   ## CI report:
   
   * 9ef8fe986e4b06d6ae8512e5051933458363511c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129597654)
   * 748954650809fd2fa18572d1c925d3b647a55906 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129707578)
   * 1ca7b625300da1a6a522fd1ba8427f5398f61ed5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129834475)
   * d0bd4cbe32866ed3f6cceafe6dd65133f96350b3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129850793)
   * bf3c8311655872a9def94d46bf1363bc89c4715f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867229)
   




[jira] [Commented] (FLINK-12675) Event time synchronization in Kafka consumer

2019-10-01 Thread Thomas Weise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-12675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16942196#comment-16942196
 ] 

Thomas Weise commented on FLINK-12675:
--

[~gerardg] yes, it is still the plan to implement source synchronization for the
Kafka consumer; I just have not gotten around to working on it. If someone else is
interested in taking this over, please let me know.

 

The implementation will be quite different from the Kinesis consumer.
Partitions are merged in the Kafka client, so we will have to pause/resume
individual partitions, while in the Kinesis case we can simply control the
consumer thread in the record emitter.
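
The pause/resume approach described above can be sketched as a small alignment policy: track a per-partition watermark and pause any partition that runs too far ahead of the slowest one. A hedged, self-contained illustration (class name and drift threshold are invented for this sketch; a real connector would feed the result to `KafkaConsumer.pause()`/`resume()`):

```java
import java.util.*;

// Hedged sketch (not Flink's actual implementation): event-time alignment
// for a partition-merging consumer such as Kafka's. Partitions whose local
// watermark runs more than maxDriftMillis ahead of the slowest partition
// are selected for pausing until the others catch up.
public class WatermarkAligner {
    private final long maxDriftMillis;

    public WatermarkAligner(long maxDriftMillis) {
        this.maxDriftMillis = maxDriftMillis;
    }

    /** Returns the partitions that should be paused until the rest catch up. */
    public Set<Integer> partitionsToPause(Map<Integer, Long> watermarkPerPartition) {
        long min = Collections.min(watermarkPerPartition.values());
        Set<Integer> toPause = new HashSet<>();
        for (Map.Entry<Integer, Long> e : watermarkPerPartition.entrySet()) {
            if (e.getValue() > min + maxDriftMillis) {
                toPause.add(e.getKey());
            }
        }
        return toPause;
    }
}
```

In the Kinesis consumer the same effect is achieved by throttling the per-shard consumer thread in the record emitter, so no pause/resume calls are needed there.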

> Event time synchronization in Kafka consumer
> 
>
> Key: FLINK-12675
> URL: https://issues.apache.org/jira/browse/FLINK-12675
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Reporter: Thomas Weise
>Assignee: Thomas Weise
>Priority: Major
>
> Integrate the source watermark tracking into the Kafka consumer and implement 
> the sync mechanism (different consumer model, compared to Kinesis).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9832: [FLINK-11843] Bind lifespan of Dispatcher to leader session

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9832: [FLINK-11843] Bind lifespan of 
Dispatcher to leader session
URL: https://github.com/apache/flink/pull/9832#issuecomment-537049332
 
 
   
   ## CI report:
   
   * abaae048fef753455970fac9d6ab421b660b0536 : UNKNOWN
   * b96c63552ccd322adae7a41a410615e95b538ece : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867595)
   




[GitHub] [flink] flinkbot edited a comment on issue #9813: [FLINK-14287] Decouple leader address from LeaderContender

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9813: [FLINK-14287] Decouple leader address 
from LeaderContender 
URL: https://github.com/apache/flink/pull/9813#issuecomment-536324105
 
 
   
   ## CI report:
   
   * b7a7e7b6dc8a16d23097911aa98852e2cf4d9c00 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129601641)
   * 3b8410b82ec366014551e35fb8791c6ad623faf9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129707833)
   * 96ef0ac36eb37792fd7837619cbca7d8a1272090 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129837942)
   * 4c3d9db5f51b815c810be6fe6ae82c62452f14b4 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/129854608)
   * 258622b3043245884ed9d567124cef31ffc0571a : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867500)
   




[GitHub] [flink] flinkbot edited a comment on issue #9807: [FLINK-14280] Introduce DispatcherRunner

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9807: [FLINK-14280] Introduce 
DispatcherRunner 
URL: https://github.com/apache/flink/pull/9807#issuecomment-536314902
 
 
   
   ## CI report:
   
   * 9ef8fe986e4b06d6ae8512e5051933458363511c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129597654)
   * 748954650809fd2fa18572d1c925d3b647a55906 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129707578)
   * 1ca7b625300da1a6a522fd1ba8427f5398f61ed5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129834475)
   * d0bd4cbe32866ed3f6cceafe6dd65133f96350b3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129850793)
   * bf3c8311655872a9def94d46bf1363bc89c4715f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867229)
   




[GitHub] [flink] flinkbot edited a comment on issue #9812: [FLINK-14286] Remove Akka specific parsing from LeaderConnectionInfo

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9812: [FLINK-14286] Remove Akka specific 
parsing from LeaderConnectionInfo
URL: https://github.com/apache/flink/pull/9812#issuecomment-536322371
 
 
   
   ## CI report:
   
   * c1ca1131b02a2c555cd8386fd07aa1cfccbab161 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129600831)
   * da85c13656e6cf02fcadbadf50df0c201f07970c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129707797)
   * 2abee623fcedcbf8c61ce455cbf59bd59b75dcf4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129837903)
   * c1efb87828fc98fa2f7ff5b07b082b9dfc67c501 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129854578)
   * 9b2e7712bc24fca3f77aa02993aa3afb89de7bfc : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867450)
   




[GitHub] [flink] flinkbot edited a comment on issue #9735: [FLINK-14156][runtime] Submit timer trigger letters to task's mailbox with operator's precedence

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9735: [FLINK-14156][runtime] Submit timer 
trigger letters to task's mailbox with operator's precedence
URL: https://github.com/apache/flink/pull/9735#issuecomment-533554096
 
 
   
   ## CI report:
   
   * 51167d9aaf8082150738dc5bd8b9e2319dd01570 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/128514569)
   * 926e1a0596edad1c2f66f4b29a7ed9352dc66e0d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129448620)
   * a07b074ce9306db8a659dda3a1fa97e3d889bfeb : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129452293)
   * eda54ed9b1ee6f2648da1e1684fbe7fa1a3b631e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129455795)
   * c33c3063cea4b510422215f62da33b509702a49f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129471120)
   * 60ce9b219ebf00fca3b84b9e935e5642514eb3ba : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129872529)
   




[GitHub] [flink] tillrohrmann commented on issue #9832: [FLINK-11843] Bind lifespan of Dispatcher to leader session

2019-10-01 Thread GitBox
tillrohrmann commented on issue #9832: [FLINK-11843] Bind lifespan of 
Dispatcher to leader session
URL: https://github.com/apache/flink/pull/9832#issuecomment-537131326
 
 
   @flinkbot run travis




[GitHub] [flink] tillrohrmann commented on issue #9813: [FLINK-14287] Decouple leader address from LeaderContender

2019-10-01 Thread GitBox
tillrohrmann commented on issue #9813: [FLINK-14287] Decouple leader address 
from LeaderContender 
URL: https://github.com/apache/flink/pull/9813#issuecomment-537131144
 
 
   @flinkbot run travis




[GitHub] [flink] tillrohrmann commented on issue #9812: [FLINK-14286] Remove Akka specific parsing from LeaderConnectionInfo

2019-10-01 Thread GitBox
tillrohrmann commented on issue #9812: [FLINK-14286] Remove Akka specific 
parsing from LeaderConnectionInfo
URL: https://github.com/apache/flink/pull/9812#issuecomment-537130961
 
 
   @flinkbot run travis




[GitHub] [flink] tillrohrmann commented on issue #9807: [FLINK-14280] Introduce DispatcherRunner

2019-10-01 Thread GitBox
tillrohrmann commented on issue #9807: [FLINK-14280] Introduce DispatcherRunner 
URL: https://github.com/apache/flink/pull/9807#issuecomment-537130667
 
 
   @flinkbot run travis




[GitHub] [flink] flinkbot edited a comment on issue #9821: [FLINK-14298] Replace LeaderContender#getAddress with #getDescription

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9821: [FLINK-14298] Replace 
LeaderContender#getAddress with #getDescription
URL: https://github.com/apache/flink/pull/9821#issuecomment-536618253
 
 
   
   ## CI report:
   
   * 9ca9df3eaa22965444373a3b6142798ca5559f50 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129712419)
   * e2b8bb516e06b5337d1f6172ac00c4f4b42f160a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129808007)
   * 7ded64e1fca1abbbcdeb55604e2ab60e351e23c0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129837967)
   * 4b668e45a1350037e7d738314dd5889f71098a8b : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/129854643)
   * 7e004fc6d0620152b3083019fedad411172b6077 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867555)
   




[jira] [Updated] (FLINK-10995) Copy intermediate serialization results only once for broadcast mode

2019-10-01 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-10995:
---
Fix Version/s: 1.10.0
Affects Version/s: 1.9.0

> Copy intermediate serialization results only once for broadcast mode
> 
>
> Key: FLINK-10995
> URL: https://issues.apache.org/jira/browse/FLINK-10995
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Affects Versions: 1.8.0, 1.9.0
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The records emitted from an operator are first serialized into an 
> intermediate byte array in {{RecordSerializer}}, and the intermediate 
> result is then copied into target buffers for the different sub partitions. In 
> broadcast mode, the same intermediate result is copied as many times as the 
> number of sub partitions, which seriously affects performance in 
> large-scale jobs.
> We can instead copy into only one target buffer shared by all the sub 
> partitions to reduce the overhead. For emitting a latency marker in broadcast 
> mode, we should flush the previously shared target buffer first, and then 
> request a new buffer for the target sub partition to send the latency marker.
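The shared-buffer idea above can be sketched in plain Java. This is an illustrative sketch only, not Flink's actual {{RecordSerializer}} code; the class and method names are hypothetical. It shows how N subpartitions can all reference one serialized payload via read-only views instead of N separate copies:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the optimization described above: serialize a record
// once, then hand every subpartition a zero-copy view of the same bytes.
class BroadcastCopySketch {
    static List<ByteBuffer> broadcast(byte[] serializedRecord, int numSubpartitions) {
        // One backing buffer shared by all subpartitions (read-only, so no
        // subpartition can mutate the shared bytes).
        ByteBuffer shared = ByteBuffer.wrap(serializedRecord).asReadOnlyBuffer();
        List<ByteBuffer> views = new ArrayList<>();
        for (int i = 0; i < numSubpartitions; i++) {
            // duplicate() shares the underlying content; no data copy happens.
            views.add(shared.duplicate());
        }
        return views;
    }
}
```

Each view has independent position/limit, which is what lets the subpartitions consume the shared payload at their own pace.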



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-10995) Copy intermediate serialization results only once for broadcast mode

2019-10-01 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-10995.
--

> Copy intermediate serialization results only once for broadcast mode
> 
>
> Key: FLINK-10995
> URL: https://issues.apache.org/jira/browse/FLINK-10995
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Affects Versions: 1.8.0, 1.9.0
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>





[jira] [Commented] (FLINK-10995) Copy intermediate serialization results only once for broadcast mode

2019-10-01 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16942129#comment-16942129
 ] 

Piotr Nowojski commented on FLINK-10995:


After modifying the benchmarks to actually use this code path, the performance 
improvement is easily visible:

http://codespeed.dak8s.net:8000/timeline/#/?exe=1,3&ben=networkBroadcastThroughput&env=2&revs=200&equid=off&quarts=on&extr=on



> Copy intermediate serialization results only once for broadcast mode
> 
>
> Key: FLINK-10995
> URL: https://issues.apache.org/jira/browse/FLINK-10995
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Affects Versions: 1.8.0
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>





[GitHub] [flink] flinkbot edited a comment on issue #9813: [FLINK-14287] Decouple leader address from LeaderContender

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9813: [FLINK-14287] Decouple leader address 
from LeaderContender 
URL: https://github.com/apache/flink/pull/9813#issuecomment-536324105
 
 
   
   ## CI report:
   
   * b7a7e7b6dc8a16d23097911aa98852e2cf4d9c00 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129601641)
   * 3b8410b82ec366014551e35fb8791c6ad623faf9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129707833)
   * 96ef0ac36eb37792fd7837619cbca7d8a1272090 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129837942)
   * 4c3d9db5f51b815c810be6fe6ae82c62452f14b4 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/129854608)
   * 258622b3043245884ed9d567124cef31ffc0571a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867500)
   




[GitHub] [flink] flinkbot edited a comment on issue #9811: [FLINK-14285] Remove generics from Dispatcher factories

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9811: [FLINK-14285] Remove generics from 
Dispatcher factories 
URL: https://github.com/apache/flink/pull/9811#issuecomment-536320588
 
 
   
   ## CI report:
   
   * 658cddd4424505c1a904a926ad6e626abbd85cd7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129600090)
   * 5b4df7e3bd41053dcce049edf420ed584a42b98a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129707753)
   * 442ea9d2bcc705ba162ffe6bd7c0d127a372c06e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129837865)
   * 2f46b0debc0a94a969510fbb07b8f1b5e6b50710 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129850925)
   * 65e72a4a5a51920697475dbc3543642b0537cdd5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867404)
   




[GitHub] [flink] flinkbot edited a comment on issue #9810: [FLINK-14284] Add shut down future to Dispatcher

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9810: [FLINK-14284] Add shut down future to 
Dispatcher
URL: https://github.com/apache/flink/pull/9810#issuecomment-536320582
 
 
   
   ## CI report:
   
   * 3ea0f47e7dd76ec115db4ef583b416685107604b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129600085)
   * e73e7e0a72952be963e29226bbb9b94487dafde6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129707704)
   * eed7f48de7520824f5674d0959a91b1dd9caa654 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129837834)
   * a4cecd0d80ce9728c96d70195e82a09958a0c90b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129850891)
   * a908e21bc54f2052fc7b4b81261af2324fd32920 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867370)
   




[GitHub] [flink] TisonKun commented on issue #9832: [FLINK-11843] Bind lifespan of Dispatcher to leader session

2019-10-01 Thread GitBox
TisonKun commented on issue #9832: [FLINK-11843] Bind lifespan of Dispatcher to 
leader session
URL: https://github.com/apache/flink/pull/9832#issuecomment-537111552
 
 
   Thanks for opening this pull request. The description and commit history 
look good. Will check out the branch and do another pass of review later.




[GitHub] [flink] TisonKun commented on a change in pull request #9832: [FLINK-11843] Bind lifespan of Dispatcher to leader session

2019-10-01 Thread GitBox
TisonKun commented on a change in pull request #9832: [FLINK-11843] Bind 
lifespan of Dispatcher to leader session
URL: https://github.com/apache/flink/pull/9832#discussion_r330140600
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/runner/DispatcherLeaderProcessImplTest.java
 ##
 @@ -239,6 +246,109 @@ public void 
closeAsync_duringJobRecovery_preventsDispatcherServiceCreation() thr
}
}
 
+   @Test
+   public void onRemovedJobGraph_cancelsRunningJob() throws Exception {
 
 Review comment:
   IIRC method names always follow lowerCamelCase?




[GitHub] [flink] flinkbot edited a comment on issue #9809: [FLINK-14282] Simplify DispatcherResourceManagerComponent hierarchy

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9809: [FLINK-14282] Simplify 
DispatcherResourceManagerComponent hierarchy
URL: https://github.com/apache/flink/pull/9809#issuecomment-536318695
 
 
   
   ## CI report:
   
   * 531cd688ceb25b544833025d9b556a0d686b29c4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129599298)
   * a3ba53aa778e777d8f38750eb6a5e5147fd21654 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129707662)
   * db3d62f7f74f3be33d65588f76a8a082f9fccfd0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129837805)
   * 544d687c37a1597737fd35747d6fa7c5740c2c40 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129850857)
   * 5fdd2823b2bdbd6dcd7b8172369b41fe5438c3ce : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867317)
   




[GitHub] [flink] flinkbot edited a comment on issue #9808: [FLINK-14281] Add DispatcherRunner#getShutDownFuture

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9808: [FLINK-14281] Add 
DispatcherRunner#getShutDownFuture
URL: https://github.com/apache/flink/pull/9808#issuecomment-536316924
 
 
   
   ## CI report:
   
   * 02c93d5d7d69324c33461040fec5c4014eb2b8d6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129598507)
   * 618f898c4a74c7f35f66ba7ab37e08cf8d66581e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129707614)
   * 4f7e44bebc6f5dd1b222e68658e40b7f2002e5fd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129834505)
   * ca5f4450fe3b2a33e39ad1b01b0381b8f7996d78 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129850827)
   * 3d2171647776f5f8537f7e130026f14c99b3fef0 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867287)
   




[GitHub] [flink] flinkbot edited a comment on issue #9831: [FLINK-14278] Extend DispatcherResourceManagerComponentFactory.create to take ioExecutor

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9831: [FLINK-14278] Extend 
DispatcherResourceManagerComponentFactory.create to take ioExecutor
URL: https://github.com/apache/flink/pull/9831#issuecomment-537027384
 
 
   
   ## CI report:
   
   * e1d37ab03c1bc62b3d45c45ee11d616ed2c78b0c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129858618)
   




[GitHub] [flink] TisonKun commented on a change in pull request #9810: [FLINK-14284] Add shut down future to Dispatcher

2019-10-01 Thread GitBox
TisonKun commented on a change in pull request #9810: [FLINK-14284] Add shut 
down future to Dispatcher
URL: https://github.com/apache/flink/pull/9810#discussion_r330125371
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java
 ##
 @@ -603,7 +616,7 @@ private JobManagerRunner 
startJobManagerRunner(JobManagerRunner jobManagerRunner
 
@Override
public CompletableFuture shutDownCluster() {
-   closeAsync();
+   shutDownFuture.complete(ApplicationStatus.SUCCEEDED);
 
 Review comment:
   Agreed, let's revisit this part when we have proper per-job mode support. Noted.




[jira] [Updated] (FLINK-14184) Provide a stage listener API to be invoked per task manager

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14184:
---
Component/s: Runtime / Coordination

> Provide a stage listener API to be invoked per task manager
> ---
>
> Key: FLINK-14184
> URL: https://issues.apache.org/jira/browse/FLINK-14184
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Stephen Connolly
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Oftentimes a topology has nodes that need to use 3rd party APIs. Not every 
> 3rd party API is written in a style well suited for use from within Flink.
> At present, implementing a `Rich___` will provide each stage with the 
> `open(...)` and `close()` callbacks, as the stage is accepted for execution 
> on each task manager.
> There is, however, a need for being able to listen for the first stage being 
> opened on any given task manager as well as the last stage being closed. 
> Critically the last stage being closed is the opportunity to release any 
> resources that are shared across multiple stages in the topology, e.g. 
> Database connection pools, Async HTTP Client thread pools, etc.
> Without such a clean-up hook, the connections and threads can act as GC roots 
> that prevent the topology's classloader from being unloaded and result in a 
> memory and resource leak in the task manager... never mind that if it is a 
> Database connection pool, it may also be consuming resources from the 
> database.
> There are three workarounds available at present:
>  # Each stage just allocates its own resources and cleans up afterwards. This 
> is, in many ways, the ideal... however this can result in more database 
> connections than intended, e.g. each stage that accesses the database 
> needs a separate database connection, rather than letting the 
> whole topology share the use of one or two connections through a connection 
> pool. Similarly, if the 3rd party library uses a static singleton for the 
> whole classloader, there is no way for the independent stages to know when it 
> is safe to shut down the singleton
>  # Implement a reference counting proxy for the 3rd party API. This is a lot 
> of work: you need to ensure that deserialization of the proxy returns a 
> classloader singleton (so you can maintain the reference counts), and if the 
> count goes wrong you have leaked the resource
>  # Use a ReferenceQueue backed proxy. This is even more complex than 
> implementing reference counting, but has the advantage of not requiring the 
> count be maintained correctly. On the other hand, it does not provide for 
> eager release of the resources.
> If Flink provided a listener contract that could be registered with the 
> execution environment then this would allow the resources to be cleared out. 
> My proposed interface would look something like
> {code:java}
> public interface EnvironmentLocalTopologyListener extends Serializable {
>   /** 
>* Called immediately prior to the first {@link 
> RichFunction#open(Configuration)}
>* being invoked for the topology on the current task manager JVM for this
>* classloader. Will not be called again unless {#close()} has been invoked 
> first.
>* Use this method to eagerly initialize any ClassLoader scoped resources 
> that
>* are pooled across the stages of the topology.
>*
>* @param parameters // I am unsure if this makes sense
>*/ 
>   default void open(Configuration parameters) throws Exception {}
>   /**
>* Called after the last {@link RichFunction#close()} has completed and the
>* topology is effectively being stopped (for the current ClassLoader).
>* This method will only be invoked if a call to {@link 
> #open(Configuration)}
>* was attempted, and will be invoked irrespective of whether the call to
>* {@link #open(Configuration)} terminated normally or exceptionally.
>* Use this method to release any ClassLoader scoped resources that have 
> been
>* pooled across the stages of the topology.
>*/
>   default void close() throws Exception {}
>   /**
>* Decorate the threads that are used to invoke the stages of the topology.
>* Use this method, for example, to seed the {@link org.slf4j.MDC} with
>* topology specific details, e.g.
>* 
>* Runnable decorate(Runnable task) {
>*   return () -> {
>* try (MDC.MDCClosable ctx = MDC.putCloseable("foo", "bar")){
>*   task.run();
>* };
>* }
>* 
>*
>* @param task // might not be the most appropriate type, I haven't 
>* // checked how Flink implements dispatch. May or may not
>* // want a parameters argument
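Workaround #2 in the list above (a reference-counting proxy) can be sketched as follows. The class name {{RefCountedResource}} is hypothetical, and this sketch deliberately omits the deserialization-singleton concern the issue mentions; it only illustrates the count-and-close mechanics:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a reference-counting wrapper around a shared
// resource: each stage acquires on open() and releases on close(); the
// underlying resource is closed only when the last stage releases it.
class RefCountedResource<T extends AutoCloseable> {
    private final T resource;
    private final AtomicInteger refs = new AtomicInteger();

    RefCountedResource(T resource) { this.resource = resource; }

    // Called from a stage's open(): bump the count and hand out the resource.
    T acquire() {
        refs.incrementAndGet();
        return resource;
    }

    // Called from a stage's close(): close the resource when the count hits 0.
    void release() {
        if (refs.decrementAndGet() == 0) {
            try {
                resource.close();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }

    int refCount() { return refs.get(); }
}
```

As the issue notes, the hard part in practice is not this class but guaranteeing every stage's deserialized proxy resolves to the same per-classloader instance.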

[jira] [Updated] (FLINK-14254) Introduce BatchFileSystemSink

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14254:
---
Component/s: Table SQL / Planner

> Introduce BatchFileSystemSink
> -
>
> Key: FLINK-14254
> URL: https://issues.apache.org/jira/browse/FLINK-14254
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>
> Introduce BatchFileSystemSink to support all table file system connectors with 
> partition support.
> BatchFileSystemSink uses a PartitionWriter to write:
>  # DynamicPartitionWriter
>  # GroupedPartitionWriter
>  # NonPartitionWriter
> BatchFileSystemSink uses a FileCommitter to commit temporary files.
>  
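The writer dispatch described above might look roughly like the following sketch. The interface shape and the in-memory buckets are illustrative assumptions, not Flink's actual PartitionWriter API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical shape of the per-row writer strategy named in the issue.
interface PartitionWriter {
    void write(String partition, String row);
}

// A "dynamic" writer keeps one bucket per partition value it encounters,
// so rows may arrive in any partition order.
class DynamicPartitionWriter implements PartitionWriter {
    final Map<String, StringBuilder> buckets = new HashMap<>();

    @Override
    public void write(String partition, String row) {
        // Lazily open a bucket (in the real sink: a temporary file) per partition.
        buckets.computeIfAbsent(partition, p -> new StringBuilder())
               .append(row)
               .append('\n');
    }
}
```

A grouped writer would instead assume rows arrive sorted by partition and keep only one bucket open at a time; the FileCommitter would then atomically move the finished temporary files into place.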





[jira] [Updated] (FLINK-13925) ClassLoader in BlobLibraryCacheManager is not using context class loader

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-13925:
---
Component/s: Runtime / Coordination

> ClassLoader in BlobLibraryCacheManager is not using context class loader
> 
>
> Key: FLINK-13925
> URL: https://issues.apache.org/jira/browse/FLINK-13925
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.8.1, 1.9.0
>Reporter: Jan Lukavský
>Assignee: Jan Lukavský
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.3, 1.9.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Use the thread's current context class loader as the parent class loader of 
> Flink user-code class loaders.





[jira] [Updated] (FLINK-14060) Set slot sharing groups according to pipelined regions

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14060:
---
Component/s: Runtime / Coordination

> Set slot sharing groups according to pipelined regions
> --
>
> Key: FLINK-14060
> URL: https://issues.apache.org/jira/browse/FLINK-14060
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Priority: Major
>
> {{StreamingJobGraphGenerator}} sets slot sharing groups for operators at 
> compile time.
>  * Identify pipelined regions, with respect to 
> {{allSourcesInSamePipelinedRegion}}
>  * Set slot sharing groups according to pipelined regions 
>  ** By default, each pipelined region should go into a separate slot sharing 
> group
>  ** If the user sets operators in multiple pipelined regions into same slot 
> sharing group, it should be respected
> This step should not introduce any behavior changes, given that later-scheduled 
> pipelined regions can reuse slots from previously scheduled pipelined 
> regions. 
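The assignment rule described above can be sketched as a small pure function. This is an illustrative sketch, not the actual {{StreamingJobGraphGenerator}} logic; the group-naming scheme {{region-N}} is an assumption:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the rule above: by default one slot sharing group
// per pipelined region, but an explicit user-set group is always respected.
class SlotSharingAssigner {
    // regionOf: operator name -> pipelined region id
    // userGroup: operator name -> explicitly configured group (absent if none)
    static Map<String, String> assignGroups(Map<String, Integer> regionOf,
                                            Map<String, String> userGroup) {
        Map<String, String> result = new HashMap<>();
        for (Map.Entry<String, Integer> e : regionOf.entrySet()) {
            String explicit = userGroup.get(e.getKey());
            // Explicit user choice wins; otherwise derive the group from the region.
            result.put(e.getKey(), explicit != null ? explicit : "region-" + e.getValue());
        }
        return result;
    }
}
```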





[jira] [Updated] (FLINK-13984) Separate on-heap and off-heap managed memory pools

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-13984:
---
Component/s: Runtime / Coordination

> Separate on-heap and off-heap managed memory pools
> --
>
> Key: FLINK-13984
> URL: https://issues.apache.org/jira/browse/FLINK-13984
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Assignee: Andrey Zagrebin
>Priority: Major
>
> * Update {{MemoryManager}} to have two separated pools.
>  * Extend {{MemoryManager}} interfaces to specify which pool to allocate 
> memory from.
> Implement this step in common code paths for the legacy / new mode. For the 
> legacy mode, depending on the configured memory type, we can set one of the 
> two pools to the managed memory size and always allocate from this pool, 
> leaving the other pool empty.
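The two-pool split described above can be sketched as follows. This is a simplified illustration under assumed names, not Flink's actual {{MemoryManager}} interface:

```java
// Hypothetical sketch of a MemoryManager with separate on-heap and off-heap
// budgets; callers specify which pool an allocation should come from.
class TwoPoolMemoryManager {
    enum MemoryType { HEAP, OFF_HEAP }

    private final long[] availableBytes = new long[2];

    TwoPoolMemoryManager(long heapBytes, long offHeapBytes) {
        availableBytes[MemoryType.HEAP.ordinal()] = heapBytes;
        availableBytes[MemoryType.OFF_HEAP.ordinal()] = offHeapBytes;
    }

    // Returns false when the requested pool is exhausted; the other pool's
    // budget is never touched.
    boolean allocate(MemoryType type, long bytes) {
        int i = type.ordinal();
        if (availableBytes[i] < bytes) {
            return false;
        }
        availableBytes[i] -= bytes;
        return true;
    }

    long available(MemoryType type) {
        return availableBytes[type.ordinal()];
    }
}
```

In the legacy mode described in the issue, one of the two constructor arguments would simply be the full managed memory size and the other zero.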





[jira] [Updated] (FLINK-14156) Execute/run processing timer triggers taking into account operator level mailbox loops

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14156:
---
Component/s: Runtime / Task

> Execute/run processing timer triggers taking into account operator level 
> mailbox loops
> --
>
> Key: FLINK-14156
> URL: https://issues.apache.org/jira/browse/FLINK-14156
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Alex
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> With FLINK-12481, the timer triggers are executed by the mailbox thread and 
> passed to the mailbox with the maximum priority.
> In case of operators that use {{mailbox.yield()}} (introduced in 
> FLINK-13248), the current approach may execute timer triggers that belong to 
> an upstream operator. Such a timer trigger may call 
> {{processElement|Watermark()}}, which would eventually come back to the 
> current operator. This situation is similar to FLINK-13063.
> To avoid this, the proposal is to set the mailbox letter priority of each 
> timer trigger to the priority of the operator that the trigger belongs to.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13982) Implement memory calculation logics

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-13982:
---
Component/s: (was: Runtime / Task)
 Runtime / Coordination

> Implement memory calculation logics
> ---
>
> Key: FLINK-13982
> URL: https://issues.apache.org/jira/browse/FLINK-13982
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * Introduce new configuration options
>  * Introduce data structures and utilities.
>  ** Data structure to store memory / pool sizes of task executor
>  ** Utility for calculating memory / pool sizes from configuration
>  ** Utility for generating dynamic configurations
>  ** Utility for generating JVM parameters
> This step should not introduce any behavior changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14239) Emitting the max watermark in StreamSource#run may cause it to arrive the downstream early

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14239:
---
Component/s: API / DataStream

> Emitting the max watermark in StreamSource#run may cause it to arrive the 
> downstream early
> --
>
> Key: FLINK-14239
> URL: https://issues.apache.org/jira/browse/FLINK-14239
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Reporter: Haibo Sun
>Assignee: Haibo Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For {{Source}}, the max watermark is emitted in {{StreamSource#run}} 
> currently. If some records are also output in {{close}} of 
> {{RichSourceFunction}}, then the max watermark will reach the downstream 
> operator before these records.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13982) Implement memory calculation logics

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-13982:
---
Component/s: Runtime / Task

> Implement memory calculation logics
> ---
>
> Key: FLINK-13982
> URL: https://issues.apache.org/jira/browse/FLINK-13982
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * Introduce new configuration options
>  * Introduce data structures and utilities.
>  ** Data structure to store memory / pool sizes of task executor
>  ** Utility for calculating memory / pool sizes from configuration
>  ** Utility for generating dynamic configurations
>  ** Utility for generating JVM parameters
> This step should not introduce any behavior changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13983) Launch task executor with new memory calculation logics

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-13983:
---
Component/s: Runtime / Task

> Launch task executor with new memory calculation logics
> ---
>
> Key: FLINK-13983
> URL: https://issues.apache.org/jira/browse/FLINK-13983
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * Invoke data structures and utilities introduced in Step 2 to generate JVM 
> parameters and dynamic configurations for launching new task executors.
>  ** In startup scripts
>  ** In resource managers
>  * Task executor uses data structures and utilities introduced in Step 2 to 
> set memory pool sizes and slot resource profiles.
>  ** {{MemoryManager}}
>  ** {{ShuffleEnvironment}}
>  ** {{TaskSlotTable}}
> Implement this step as separate code paths only for the new mode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13803) Introduce SpillableHeapKeyedStateBackend and all necessities

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-13803:
---
Component/s: Runtime / State Backends

> Introduce SpillableHeapKeyedStateBackend and all necessities
> 
>
> Key: FLINK-13803
> URL: https://issues.apache.org/jira/browse/FLINK-13803
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Yu Li
>Priority: Major
>
> This JIRA aims at introducing a new {{SpillableHeapKeyedStateBackend}} which 
> will reuse most code of the {{HeapKeyedStateBackend}} (probably the only 
> difference is the spill-able one will register a {{HybridStateTable}}), and 
> allow using it in {{FsStateBackend}} and {{MemoryStateBackend}} (only as an 
> option, by default still {{HeapKeyedStateBackend}}) through configuration.
> The related necessities include but are not limited to:
> * A relative backend builder class
> * Relative restore operation classes
> * Necessary configurations for using spill-able backend
> This should be the last JIRA, after which the spill-able heap backend 
> feature will become runnable, regardless of stability and performance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13813) metrics is different between overview ui and metric response

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-13813:
---
Component/s: Runtime / Web Frontend

> metrics is different between overview ui and metric response
> 
>
> Key: FLINK-13813
> URL: https://issues.apache.org/jira/browse/FLINK-13813
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
> Environment: Flink 1.8.1 with hdfs
>Reporter: lidesheng
>Priority: Major
> Attachments: metrics.png, overview.png
>
>
> After a Flink task finishes, I get metrics via an HTTP request. The record 
> count in the response is different from the job overview.
> get http://x.x.x.x:8081/jobs/57969f3978edf3115354fab1a72fd0c8 returns:
> { "jid": "57969f3978edf3115354fab1a72fd0c8", "name": 
> "3349_cjrw_1566177018152", "isStoppable": false, "state": "FINISHED", ... 
> "vertices": [ \{ "id": "d1cdde18b91ef6ce7c6a1cfdfa9e968d", "name": "CHAIN 
> DataSource (3349_cjrw_1566177018152/SOURCE/0) -> Map 
> (3349_cjrw_1566177018152/AUDIT/0)", "parallelism": 1, "status": "FINISHED", 
> ... "metrics": { "read-bytes": 0, "read-bytes-complete": true, "write-bytes": 
> 555941888, "write-bytes-complete": true, "read-records": 0, 
> "read-records-complete": true, "write-records": 500, 
> "write-records-complete": true } ... } ... ] }}
> But, the metrics by 
> http://x.x.x.x:8081/jobs/57969f3978edf3115354fab1a72fd0c8/vertices/d1cdde18b91ef6ce7c6a1cfdfa9e968d/metrics?get=0.numRecordsOut
>  returns
> [\{"id":"0.numRecordsOut","value":"4084803"}]
> The overview record count is different from the task metrics; please see 
> the attachments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13846) Implement benchmark case on MapState#isEmpty

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-13846:
---
Component/s: Benchmarks

> Implement benchmark case on MapState#isEmpty
> 
>
> Key: FLINK-13846
> URL: https://issues.apache.org/jira/browse/FLINK-13846
> Project: Flink
>  Issue Type: Improvement
>  Components: Benchmarks
>Reporter: Yun Tang
>Priority: Major
>
> Once FLINK-13034 is merged, we need to implement a benchmark case for 
> {{MapState#isEmpty}} in https://github.com/dataArtisans/flink-benchmarks
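What such a benchmark would compare can be sketched in plain Java: a direct emptiness check versus probing an iterator, which is what callers must do without a dedicated {{isEmpty}}. The real benchmark should use JMH (as flink-benchmarks does); this sketch only illustrates the two measured operations, using a {{HashMap}} as a stand-in for heap-backed map state.

```java
import java.util.Map;

// Stand-in for the two ways of checking emptiness of a map-like state.
class MapStateIsEmptySketch {
    // Direct check, analogous to the proposed MapState#isEmpty.
    static boolean isEmptyDirect(Map<String, Integer> state) {
        return state.isEmpty();
    }

    // Emptiness via iterator, what callers do without a dedicated isEmpty;
    // for some backends this forces creating an iterator per call.
    static boolean isEmptyViaIterator(Map<String, Integer> state) {
        return !state.entrySet().iterator().hasNext();
    }
}
```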



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14099) SQL supports timestamp in Long

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14099:
---
Component/s: Table SQL / Planner

> SQL supports timestamp in Long
> --
>
> Key: FLINK-14099
> URL: https://issues.apache.org/jira/browse/FLINK-14099
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: Zijie Lu
>Priority: Major
>
> The rowtime field only supports SQL timestamp but not long. Can the rowtime 
> field support long values?
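The workaround this issue asks to avoid can be sketched as a plain-Java conversion: mapping a long epoch-milliseconds field to a SQL timestamp before it can serve as rowtime. The class name is invented for the sketch.

```java
import java.sql.Timestamp;

// Converts an epoch-millis long (as often found in event payloads) into
// the SQL TIMESTAMP type the rowtime attribute currently requires.
class RowtimeConversionSketch {
    static Timestamp toSqlTimestamp(long epochMillis) {
        return new Timestamp(epochMillis);
    }
}
```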



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14057) Add Remove Other Timers to TimerService

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14057:
---
Component/s: API / DataStream

> Add Remove Other Timers to TimerService
> ---
>
> Key: FLINK-14057
> URL: https://issues.apache.org/jira/browse/FLINK-14057
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Jesse Anderson
>Priority: Major
>
> The TimerService has the ability to add timers with 
> registerProcessingTimeTimer. This method can be called many times with 
> different timer times.
> If you want to add a new timer and delete all other timers, you have to keep 
> track of all previous timer times and call deleteProcessingTimeTimer for each 
> one. This forces you to keep track of all previous (unexpired) timers for a 
> key.
> Instead, I suggest overloading registerProcessingTimeTimer with a second 
> boolean argument that will remove all previous timers and set the new timer.
> Note: although I'm using registerProcessingTimeTimer, this applies to 
> registerEventTimeTimer as well.
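The proposed overload can be sketched with a self-contained timer registry (the {{TimerRegistry}} class is hypothetical; Flink's actual TimerService API differs):

```java
import java.util.TreeSet;

// Illustrative registry of pending processing-time timers for one key.
class TimerRegistry {
    private final TreeSet<Long> timers = new TreeSet<>();

    // Existing behavior: each call adds another pending timer.
    void registerProcessingTimeTimer(long time) {
        timers.add(time);
    }

    // Proposed overload: with replaceExisting=true, all previously
    // registered timers are dropped first, so callers need not track
    // every earlier registration themselves.
    void registerProcessingTimeTimer(long time, boolean replaceExisting) {
        if (replaceExisting) {
            timers.clear();
        }
        timers.add(time);
    }

    int activeTimers() {
        return timers.size();
    }
}
```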



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13860) Flink Apache Kudu Connector

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-13860:
---
Component/s: Connectors / Common

> Flink Apache Kudu Connector
> ---
>
> Key: FLINK-13860
> URL: https://issues.apache.org/jira/browse/FLINK-13860
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: João Boto
>Priority: Major
>
> Hi..
> I'm the contributor and maintainer of this connector on Bahir-Flink project
> [https://github.com/apache/bahir-flink/tree/master/flink-connector-kudu]
>  
> but it seems the Flink connectors on that project are less actively 
> maintained and it's difficult to keep the code up to date, as PRs take a 
> while to be merged and no version has ever been released, which makes the 
> connector difficult to use
>  
> I would like to contribute that code to Flink, allowing others to contribute 
> to and use that connector
>  
> [~fhueske] what do you think?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9830: [FLINK-14307] Extract JobGraphWriter from JobGraphStore

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9830: [FLINK-14307] Extract JobGraphWriter 
from JobGraphStore
URL: https://github.com/apache/flink/pull/9830#issuecomment-537017897
 
 
   
   ## CI report:
   
   * ed86e6d768a1353423d969e6c9dcf5fd967c4f19 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129854718)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14172) Implement KubeClient with official Java client library for kubernetes

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14172:
---
Component/s: Deployment / Kubernetes

> Implement KubeClient with official Java client library for kubernetes
> -
>
> Key: FLINK-14172
> URL: https://issues.apache.org/jira/browse/FLINK-14172
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Priority: Major
>
> The official Java client library for Kubernetes has become more and more 
> active. Its new features (such as leader election) and some client 
> implementations (informer, lister, cache) are better. So we should use the 
> official Java client for Kubernetes in Flink.
> https://github.com/kubernetes-client/java



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14062) Set managed memory fractions according to slot sharing groups

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14062:
---
Component/s: Runtime / Task

> Set managed memory fractions according to slot sharing groups
> -
>
> Key: FLINK-14062
> URL: https://issues.apache.org/jira/browse/FLINK-14062
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Xintong Song
>Priority: Major
>
> * For operators with specified {{ResourceSpecs}}, calculate fractions 
> according to operators {{ResourceSpecs}}
>  * For operators with unknown {{ResourceSpecs}}, calculate fractions 
> according to number of operators using managed memory
> This step should not introduce any behavior changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14192) Enable the dynamic slot allocation feature.

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14192:
---
Component/s: Runtime / Coordination

> Enable the dynamic slot allocation feature.
> ---
>
> Key: FLINK-14192
> URL: https://issues.apache.org/jira/browse/FLINK-14192
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Priority: Major
>
> * ResourceManager uses the default slot resource profiles registered by 
> TaskExecutors, instead of those calculated on the RM side.
>  * ResourceManager uses the actual requested resource profiles for slot 
> requests, instead of assuming the default profile for all requests.
>  * TaskExecutor bookkeeps with the actual requested resource profiles, 
> instead of assuming the default profile for all requests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14061) Introduce managed memory fractions to StreamConfig

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14061:
---
Component/s: Runtime / Task

> Introduce managed memory fractions to StreamConfig
> --
>
> Key: FLINK-14061
> URL: https://issues.apache.org/jira/browse/FLINK-14061
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Xintong Song
>Priority: Major
>
> Introduce {{fracManagedMemOnHeap}} and {{fracManagedMemOffHeap}} in 
> {{StreamConfig}}, so they can be set by {{StreamingJobGraphGenerator}} and 
> used by operators at runtime. 
> This step should not introduce any behavior changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14191) Extend SlotManager to support dynamic slot allocation on pending task executors

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14191:
---
Component/s: Runtime / Coordination

> Extend SlotManager to support dynamic slot allocation on pending task 
> executors
> ---
>
> Key: FLINK-14191
> URL: https://issues.apache.org/jira/browse/FLINK-14191
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Priority: Major
>
> * Introduce PendingTaskManagerResources
>  * Create PendingTaskManagerSlot on allocation, from 
> PendingTaskManagerResource
>  * Map registered task executors to matching PendingTaskManagerResources, and 
> allocate slots for corresponding PendingTaskManagerSlots
> Convert registered task executor free slots into equivalent available 
> resources according to default slot resource profiles.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14134) Introduce LimitableTableSource to optimize limit

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14134:
---
Component/s: Connectors / Hive

> Introduce LimitableTableSource to optimize limit
> 
>
> Key: FLINK-14134
> URL: https://issues.apache.org/jira/browse/FLINK-14134
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Jingsong Lee
>Priority: Major
>
> SQL: select * from t1 limit 1
> Currently the source will scan the full table. If we introduce 
> LimitableTableSource and let the source know the limit, the source can stop 
> after reading just one row.
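The shape of a limit-aware source can be sketched as follows; the interface and class names here are illustrative stand-ins, not Flink's actual table source API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical contract: the planner pushes the limit down to the source.
interface LimitableSource {
    void applyLimit(long limit);
}

// Toy in-memory "table" that honors a pushed-down limit.
class InMemorySource implements LimitableSource {
    private final List<String> rows;
    private long limit = Long.MAX_VALUE;

    InMemorySource(List<String> rows) {
        this.rows = rows;
    }

    @Override
    public void applyLimit(long limit) {
        this.limit = limit;
    }

    // The scan stops as soon as the limit is reached, instead of
    // reading the full table.
    List<String> scan() {
        List<String> out = new ArrayList<>();
        for (String row : rows) {
            if (out.size() >= limit) {
                break;
            }
            out.add(row);
        }
        return out;
    }
}
```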



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14063) Operators use fractions to decide how many managed memory to allocate

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14063:
---
Component/s: Runtime / Task

> Operators use fractions to decide how many managed memory to allocate
> -
>
> Key: FLINK-14063
> URL: https://issues.apache.org/jira/browse/FLINK-14063
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Xintong Song
>Priority: Major
>
> * Operators allocate memory segments with the amount returned by 
> {{MemoryManager#computeNumberOfPages}}.
>  * Operators reserve memory with the amount returned by 
> {{MemoryManager#computeMemorySize}}. 
> This step activates the new fraction based managed memory.
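The fraction-based sizing described above amounts to simple arithmetic; this sketch only loosely mirrors the {{MemoryManager}} methods mentioned in the description, and the real signatures may differ:

```java
// Illustrative fraction-based sizing: an operator's share of the slot's
// managed memory is totalBytes * fraction, rounded down to whole pages.
class ManagedMemoryFractions {
    static long computeMemorySize(long totalManagedBytes, double fraction) {
        return (long) (totalManagedBytes * fraction);
    }

    static int computeNumberOfPages(long totalManagedBytes, double fraction, int pageSize) {
        return (int) (computeMemorySize(totalManagedBytes, fraction) / pageSize);
    }
}
```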



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9832: [FLINK-11843] Bind lifespan of Dispatcher to leader session

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9832: [FLINK-11843] Bind lifespan of 
Dispatcher to leader session
URL: https://github.com/apache/flink/pull/9832#issuecomment-537049332
 
 
   
   ## CI report:
   
   * abaae048fef753455970fac9d6ab421b660b0536 : UNKNOWN
   * b96c63552ccd322adae7a41a410615e95b538ece : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867595)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14190) Extend SlotManager to support dynamic slot allocation on registered TaskExecutors.

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14190:
---
Component/s: Runtime / Coordination

> Extend SlotManager to support dynamic slot allocation on registered 
> TaskExecutors.
> --
>
> Key: FLINK-14190
> URL: https://issues.apache.org/jira/browse/FLINK-14190
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Priority: Major
>
> * Bookkeep task manager available resources
>  * Match between slot requests and task executor resources
>  ** Find task executors with matching available resources for slot requests
>  ** Find matching pending slot requests for task executors with new available 
> resources
>  * Create TaskManagerSlot on allocation and remove on free.
>  * Request slot from TaskExecutor with resource profiles.
> Use RM calculated default resource profiles for all slot requests. Convert 
> free slots in SlotReports into equivalent available resources according to 
> default slot resource profiles.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14194) Clean-up of legacy mode.

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14194:
---
Component/s: Runtime / Coordination

> Clean-up of legacy mode.
> 
>
> Key: FLINK-14194
> URL: https://issues.apache.org/jira/browse/FLINK-14194
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14199) Only use dedicated/named classes for mailbox letters.

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14199:
---
Component/s: Runtime / Task

> Only use dedicated/named classes for mailbox letters.
> -
>
> Key: FLINK-14199
> URL: https://issues.apache.org/jira/browse/FLINK-14199
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Reporter: Arvid Heise
>Priority: Major
> Attachments: Screenshot 2019-08-22 at 12.59.59.png
>
>
> Unnamed lambdas make it difficult to debug the code
>  !Screenshot 2019-08-22 at 12.59.59.png! 
> Note: there are cases of mailbox usage that need a Future to track when a 
> letter is executed and to get the success/exception result. The current 
> implementation (in MailboxExecutor) piggybacks on 
> java.util.concurrent.FutureTask and the fact that it implements the Java 
> Executor API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14193) Update RestAPI / Web UI

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14193:
---
Component/s: Runtime / Web Frontend
 Runtime / Coordination

> Update RestAPI / Web UI
> ---
>
> Key: FLINK-14193
> URL: https://issues.apache.org/jira/browse/FLINK-14193
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Runtime / Web Frontend
>Reporter: Xintong Song
>Priority: Major
>
> * Update RestAPI / WebUI to properly display information of available 
> resources and allocated slots of task executors.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14189) Extend TaskExecutor to support dynamic slot allocation

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14189:
---
Component/s: Runtime / Coordination

> Extend TaskExecutor to support dynamic slot allocation
> --
>
> Key: FLINK-14189
> URL: https://issues.apache.org/jira/browse/FLINK-14189
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Priority: Major
>
> * TaskSlotTable
>  ** Bookkeep task manager available resources
>  ** Add and implement interface for dynamic allocating slot (with resource 
> profile instead of slot index)
>  ** Create slot report with dynamic allocated slots and remaining available 
> resources
>  * TaskExecutor
>  ** Support request slot with resource profile rather than slot id.
> The slot report still contains the status of legacy free slots. When 
> ResourceManager requests slots with a slot id, convert it to the default 
> slot resource profile for bookkeeping.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14188) TaskExecutor derive and register with default slot resource profile

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14188:
---
Component/s: Runtime / Coordination

> TaskExecutor derive and register with default slot resource profile
> ---
>
> Key: FLINK-14188
> URL: https://issues.apache.org/jira/browse/FLINK-14188
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Priority: Major
>
> * Introduce config option for defaultSlotFraction
>  * Derive default slot resource profile from the new config option, or the 
> legacy config option "taskmanager.numberOfTaskSlots".
>  * Register task executor with the default slot resource profile.
> This step should not introduce any behavior changes.
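The two derivation paths in the description can be sketched as plain arithmetic (class and method names are invented for the sketch; Flink's actual config keys are the new defaultSlotFraction option and the legacy "taskmanager.numberOfTaskSlots"):

```java
// Illustrative derivation of the default slot resource from total
// task executor resources.
class DefaultSlotProfile {
    // New path: a configured fraction of the total resource per slot.
    static long fromFraction(long totalResourceBytes, double defaultSlotFraction) {
        return (long) (totalResourceBytes * defaultSlotFraction);
    }

    // Legacy path: divide the total resource evenly across the
    // configured number of task slots.
    static long fromNumberOfSlots(long totalResourceBytes, int numberOfTaskSlots) {
        return totalResourceBytes / numberOfTaskSlots;
    }
}
```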



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13854) Support Aggregating in Join and CoGroup

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-13854:
---
Component/s: API / DataStream

> Support Aggregating in Join and CoGroup
> ---
>
> Key: FLINK-13854
> URL: https://issues.apache.org/jira/browse/FLINK-13854
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Affects Versions: 1.9.0
>Reporter: Jiayi Liao
>Priority: Major
>
> In WindowStream we can use  windowStream.aggregate(AggregateFunction, 
> WindowFunction) to aggregate input records in real-time.   
> I think we should support similar api in JoinedStreams and CoGroupStreams, 
> because it's a very huge cost by storing the records log in state backend, 
> especially when we don't need the specific records.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9815: [FLINK-14117][docs-zh] Translate changes on index page to Chinese

2019-10-01 Thread GitBox
flinkbot edited a comment on issue #9815: [FLINK-14117][docs-zh] Translate 
changes on index page to Chinese
URL: https://github.com/apache/flink/pull/9815#issuecomment-536343016
 
 
   
   ## CI report:
   
   * 615acdb2511760c55f8831934f710678a8962acc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129610971)
   * 38894aa30a82d5763ad8137e176ac7d78ee13178 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129652562)
   * 4b35c7d8ce5d86f030037b924a943f857d7f8a01 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129684998)
   * 9099b3a2a220c0d289899f05cea6714fc281d800 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129802867)
   * 90d7fce7f9b27fe52e84c65545c701609b9a3ea9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129858575)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13986) Clean-up of legacy mode

2019-10-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-13986:
---
Component/s: Runtime / Task

> Clean-up of legacy mode
> ---
>
> Key: FLINK-13986
> URL: https://issues.apache.org/jira/browse/FLINK-13986
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Xintong Song
>Priority: Major
>
> * Fix / update / remove test cases for legacy mode
>  * Deprecate / remove legacy config options.
>  * Remove legacy code paths
>  * Remove the switch for legacy / new mode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

