[jira] [Updated] (FLINK-3701) Cant call execute after first execution
[ https://issues.apache.org/jira/browse/FLINK-3701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nikolaas Steenbergen updated FLINK-3701:

Description:
In the Scala shell (local mode), version 1.0 this works:
{code}
Scala-Flink> var b = env.fromElements("a","b")
Scala-Flink> b.print
Scala-Flink> var c = env.fromElements("c","d")
Scala-Flink> c.print
{code}
In the current master, the same sequence (after c.print) leads to:
{code}
java.lang.NullPointerException
    at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:1031)
    at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:961)
    at org.apache.flink.api.java.ScalaShellRemoteEnvironment.execute(ScalaShellRemoteEnvironment.java:70)
    at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:855)
    at org.apache.flink.api.java.DataSet.collect(DataSet.java:410)
    at org.apache.flink.api.java.DataSet.print(DataSet.java:1605)
    at org.apache.flink.api.scala.DataSet.print(DataSet.scala:1615)
    at .(:56)
    at .()
    at .(:7)
    at .()
    at $print()
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:734)
    at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:983)
    at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:573)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:604)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:568)
    at scala.tools.nsc.interpreter.ILoop.reallyInterpret$1(ILoop.scala:760)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:805)
    at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:717)
    at scala.tools.nsc.interpreter.ILoop.processLine$1(ILoop.scala:581)
    at scala.tools.nsc.interpreter.ILoop.innerLoop$1(ILoop.scala:588)
    at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:591)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:882)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:837)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:837)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:837)
    at org.apache.flink.api.scala.FlinkShell$.startShell(FlinkShell.scala:199)
    at org.apache.flink.api.scala.FlinkShell$.main(FlinkShell.scala:127)
    at org.apache.flink.api.scala.FlinkShell.main(FlinkShell.scala)
{code}
[jira] [Updated] (FLINK-3701) Cant call execute after first execution
[ https://issues.apache.org/jira/browse/FLINK-3701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nikolaas Steenbergen updated FLINK-3701: Description: in version 1.0 this works: {code:scala} Scala-Flink> var b = env.fromElements("a","b") Scala-Flink> b.print Scala-Flink> var c = env.fromElements("c","d") Scala-Flink> c.print {code} in the current master (after c.print) this leads to : {code:scala} java.lang.NullPointerException at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:1031) at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:961) at org.apache.flink.api.java.ScalaShellRemoteEnvironment.execute(ScalaShellRemoteEnvironment.java:70) at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:855) at org.apache.flink.api.java.DataSet.collect(DataSet.java:410) at org.apache.flink.api.java.DataSet.print(DataSet.java:1605) at org.apache.flink.api.scala.DataSet.print(DataSet.scala:1615) at .(:56) at .() at .(:7) at .() at $print() at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:734) at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:983) at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:573) at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:604) at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:568) at scala.tools.nsc.interpreter.ILoop.reallyInterpret$1(ILoop.scala:760) at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:805) at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:717) at scala.tools.nsc.interpreter.ILoop.processLine$1(ILoop.scala:581) at 
scala.tools.nsc.interpreter.ILoop.innerLoop$1(ILoop.scala:588) at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:591) at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:882) at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:837) at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:837) at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135) at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:837) at org.apache.flink.api.scala.FlinkShell$.startShell(FlinkShell.scala:199) at org.apache.flink.api.scala.FlinkShell$.main(FlinkShell.scala:127) at org.apache.flink.api.scala.FlinkShell.main(FlinkShell.scala) {code} was: in version 1.0 this works: ~~~scala Scala-Flink> var b = env.fromElements("a","b") Scala-Flink> b.print Scala-Flink> var c = env.fromElements("c","d") Scala-Flink> c.print ~~~ in the current master (after c.print) this leads to : ~~~ java.lang.NullPointerException at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:1031) at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:961) at org.apache.flink.api.java.ScalaShellRemoteEnvironment.execute(ScalaShellRemoteEnvironment.java:70) at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:855) at org.apache.flink.api.java.DataSet.collect(DataSet.java:410) at org.apache.flink.api.java.DataSet.print(DataSet.java:1605) at org.apache.flink.api.scala.DataSet.print(DataSet.scala:1615) at .(:56) at .() at .(:7) at .() at $print() at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:734) at 
scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:983) at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:573) at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:604) at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:568) at scala.tools.nsc.interpreter.ILoop.reallyInterpret$1(ILoop.scala:760) at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:805) at
[jira] [Created] (FLINK-3701) Cant call execute after first execution

Nikolaas Steenbergen created FLINK-3701:
---
Summary: Cant call execute after first execution
Key: FLINK-3701
URL: https://issues.apache.org/jira/browse/FLINK-3701
Project: Flink
Issue Type: Bug
Components: Scala Shell
Reporter: Nikolaas Steenbergen

in version 1.0 this works:
{code}
Scala-Flink> var b = env.fromElements("a","b")
Scala-Flink> b.print
Scala-Flink> var c = env.fromElements("c","d")
Scala-Flink> c.print
{code}
in the current master (after c.print) this leads to:
{code}
java.lang.NullPointerException
    at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:1031)
    at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:961)
    at org.apache.flink.api.java.ScalaShellRemoteEnvironment.execute(ScalaShellRemoteEnvironment.java:70)
    at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:855)
    at org.apache.flink.api.java.DataSet.collect(DataSet.java:410)
    at org.apache.flink.api.java.DataSet.print(DataSet.java:1605)
    at org.apache.flink.api.scala.DataSet.print(DataSet.scala:1615)
    at .(:56)
    at .()
    at .(:7)
    at .()
    at $print()
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:734)
    at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:983)
    at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:573)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:604)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:568)
    at scala.tools.nsc.interpreter.ILoop.reallyInterpret$1(ILoop.scala:760)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:805)
    at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:717)
    at scala.tools.nsc.interpreter.ILoop.processLine$1(ILoop.scala:581)
    at scala.tools.nsc.interpreter.ILoop.innerLoop$1(ILoop.scala:588)
    at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:591)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:882)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:837)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:837)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:837)
    at org.apache.flink.api.scala.FlinkShell$.startShell(FlinkShell.scala:199)
    at org.apache.flink.api.scala.FlinkShell$.main(FlinkShell.scala:127)
    at org.apache.flink.api.scala.FlinkShell.main(FlinkShell.scala)
{code}

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
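The failure pattern reported above — an environment whose internal plan state is consumed by the first execute() and never re-initialized, so the second print()/execute() dereferences null — can be sketched without Flink. Everything below (the PlanBuilder class, its sinks field, the method names) is an illustrative assumption modeled on the stack trace, not Flink's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: a builder that nulls its sink list after the
// first plan is built. Without the guard in addSink, a second
// build would hit a NullPointerException, mirroring the shell failure.
class PlanBuilder {
    private List<String> sinks = new ArrayList<>();

    // Called per print()/collect(): registers a sink, re-creating the
    // list if a previous execution consumed it (the defensive fix).
    void addSink(String sink) {
        if (sinks == null) {
            sinks = new ArrayList<>();
        }
        sinks.add(sink);
    }

    // Called per execute(): consumes the registered sinks. Dereferencing
    // 'sinks' here would NPE on the second run without the guard above.
    String createProgramPlan() {
        String plan = String.join(" -> ", sinks);
        sinks = null; // state consumed; must be rebuilt before the next run
        return plan;
    }
}
```

With the guard in place, the shell-style sequence (register a sink, execute, register another sink, execute again) completes both times instead of failing on the second execution.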
[jira] [Updated] (FLINK-3687) org.apache.flink.runtime.net.ConnectionUtilsTest fails
[ https://issues.apache.org/jira/browse/FLINK-3687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nikolaas Steenbergen updated FLINK-3687:

Summary: org.apache.flink.runtime.net.ConnectionUtilsTest fails
(was: runtime.leaderelection.ZooKeeperLeaderElectionTest fails)

> org.apache.flink.runtime.net.ConnectionUtilsTest fails
> ------------------------------------------------------
>
> Key: FLINK-3687
> URL: https://issues.apache.org/jira/browse/FLINK-3687
> Project: Flink
> Issue Type: Bug
> Reporter: Nikolaas Steenbergen
>
> {code}
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.51 sec <<< FAILURE! - in org.apache.flink.runtime.net.ConnectionUtilsTest
> testReturnLocalHostAddressUsingHeuristics(org.apache.flink.runtime.net.ConnectionUtilsTest)  Time elapsed: 0.504 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
>     at org.junit.Assert.fail(Assert.java:88)
>     at org.junit.Assert.failNotEquals(Assert.java:743)
>     at org.junit.Assert.assertEquals(Assert.java:118)
>     at org.junit.Assert.assertEquals(Assert.java:144)
>     at org.apache.flink.runtime.net.ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics(ConnectionUtilsTest.java:59)
>
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.554 sec - in org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest
>
> Results :
>
> Failed tests:
>   ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics:59 expected: but was:
> {code}
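The failing assertion compares an expected InetAddress against whatever address the heuristics in ConnectionUtils pick, and that choice depends on the host's network interfaces, which is a common source of environment-dependent test failures. A sketch of one typical local-address heuristic (illustrative only — not Flink's actual ConnectionUtils logic; the class and method names are assumptions):

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Collections;

// Illustrative heuristic: prefer the first non-loopback IPv4 address found
// on any interface, falling back to loopback. Because the result varies
// from machine to machine, asserting equality against a fixed address
// (as the failing test does) can be flaky.
class LocalAddressHeuristic {
    static InetAddress findLocalAddress() throws Exception {
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                // getAddress() is 4 bytes for IPv4, 16 for IPv6
                if (!addr.isLoopbackAddress() && addr.getAddress().length == 4) {
                    return addr;
                }
            }
        }
        return InetAddress.getLoopbackAddress();
    }
}
```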
[jira] [Commented] (FLINK-3687) org.apache.flink.runtime.net.ConnectionUtilsTest fails
[ https://issues.apache.org/jira/browse/FLINK-3687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221412#comment-15221412 ]

Nikolaas Steenbergen commented on FLINK-3687:

you are right, my bad ;)
[jira] [Updated] (FLINK-3687) runtime.leaderelection.ZooKeeperLeaderElectionTest fails
[ https://issues.apache.org/jira/browse/FLINK-3687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nikolaas Steenbergen updated FLINK-3687:

Description:
{code}
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.51 sec <<< FAILURE! - in org.apache.flink.runtime.net.ConnectionUtilsTest
testReturnLocalHostAddressUsingHeuristics(org.apache.flink.runtime.net.ConnectionUtilsTest)  Time elapsed: 0.504 sec  <<< FAILURE!
java.lang.AssertionError: expected: but was:
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:144)
    at org.apache.flink.runtime.net.ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics(ConnectionUtilsTest.java:59)

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.554 sec - in org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest

Results :

Failed tests:
  ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics:59 expected: but was:
{code}
[jira] [Created] (FLINK-3687) runtime.leaderelection.ZooKeeperLeaderElectionTest fails

Nikolaas Steenbergen created FLINK-3687:
---
Summary: runtime.leaderelection.ZooKeeperLeaderElectionTest fails
Key: FLINK-3687
URL: https://issues.apache.org/jira/browse/FLINK-3687
Project: Flink
Issue Type: Bug
Reporter: Nikolaas Steenbergen

{code}
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.51 sec <<< FAILURE! - in org.apache.flink.runtime.net.ConnectionUtilsTest
testReturnLocalHostAddressUsingHeuristics(org.apache.flink.runtime.net.ConnectionUtilsTest)  Time elapsed: 0.504 sec  <<< FAILURE!
java.lang.AssertionError: expected: but was:
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:144)
    at org.apache.flink.runtime.net.ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics(ConnectionUtilsTest.java:59)

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.554 sec - in org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest

Results :

Failed tests:
  ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics:59 expected: but was:
{code}
[jira] [Updated] (FLINK-3686) resourcemanager.ClusterShutdownITCase sometimes fails
[ https://issues.apache.org/jira/browse/FLINK-3686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nikolaas Steenbergen updated FLINK-3686:

Description:
resourcemanager.ClusterShutdownITCase sometimes fails
{code}
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 32.47 sec <<< FAILURE! - in org.apache.flink.runtime.resourcemanager.ClusterShutdownITCase
testClusterShutdown(org.apache.flink.runtime.resourcemanager.ClusterShutdownITCase)  Time elapsed: 30.063 sec  <<< FAILURE!
java.lang.AssertionError: assertion failed: timeout (29897828577 nanoseconds) during expectMsgClass waiting for class org.apache.flink.runtime.clusterframework.messages.StopClusterSuccessful
    at scala.Predef$.assert(Predef.scala:179)
    at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:423)
    at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:410)
    at akka.testkit.TestKit.expectMsgClass(TestKit.scala:718)
    at akka.testkit.JavaTestKit.expectMsgClass(JavaTestKit.java:397)
    at org.apache.flink.runtime.resourcemanager.ClusterShutdownITCase$1$1.run(ClusterShutdownITCase.java:92)
    at akka.testkit.JavaTestKit$Within$1.apply(JavaTestKit.java:232)
    at akka.testkit.TestKitBase$class.within(TestKit.scala:296)
    at akka.testkit.TestKit.within(TestKit.scala:718)
    at akka.testkit.TestKitBase$class.within(TestKit.scala:310)
    at akka.testkit.TestKit.within(TestKit.scala:718)
    at akka.testkit.JavaTestKit$Within.(JavaTestKit.java:230)
    at org.apache.flink.runtime.resourcemanager.ClusterShutdownITCase$1$1.(ClusterShutdownITCase.java:72)
    at org.apache.flink.runtime.resourcemanager.ClusterShutdownITCase$1.(ClusterShutdownITCase.java:72)
    at org.apache.flink.runtime.resourcemanager.ClusterShutdownITCase.testClusterShutdown(ClusterShutdownITCase.java:71)

Results :

Failed tests:
  ClusterShutdownITCase.testClusterShutdown:71 assertion failed: timeout (29897828577 nanoseconds) during expectMsgClass waiting for class org.apache.flink.runtime.clusterframework.messages.StopClusterSuccessful
{code}
[jira] [Created] (FLINK-3686) resourcemanager.ClusterShutdownITCase sometimes fails
Nikolaas Steenbergen created FLINK-3686:
-------------------------------------------

             Summary: resourcemanager.ClusterShutdownITCase sometimes fails
                 Key: FLINK-3686
                 URL: https://issues.apache.org/jira/browse/FLINK-3686
             Project: Flink
          Issue Type: Bug
          Components: Tests
            Reporter: Nikolaas Steenbergen

resourcemanager.ClusterShutdownITCase sometimes fails

{code}
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 32.47 sec <<< FAILURE! - in org.apache.flink.runtime.resourcemanager.ClusterShutdownITCase
testClusterShutdown(org.apache.flink.runtime.resourcemanager.ClusterShutdownITCase)  Time elapsed: 30.063 sec  <<< FAILURE!
java.lang.AssertionError: assertion failed: timeout (29897828577 nanoseconds) during expectMsgClass waiting for class org.apache.flink.runtime.clusterframework.messages.StopClusterSuccessful
	at scala.Predef$.assert(Predef.scala:179)
	at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:423)
	at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:410)
	at akka.testkit.TestKit.expectMsgClass(TestKit.scala:718)
	at akka.testkit.JavaTestKit.expectMsgClass(JavaTestKit.java:397)
	at org.apache.flink.runtime.resourcemanager.ClusterShutdownITCase$1$1.run(ClusterShutdownITCase.java:92)
	at akka.testkit.JavaTestKit$Within$1.apply(JavaTestKit.java:232)
	at akka.testkit.TestKitBase$class.within(TestKit.scala:296)
	at akka.testkit.TestKit.within(TestKit.scala:718)
	at akka.testkit.TestKitBase$class.within(TestKit.scala:310)
	at akka.testkit.TestKit.within(TestKit.scala:718)
	at akka.testkit.JavaTestKit$Within.<init>(JavaTestKit.java:230)
	at org.apache.flink.runtime.resourcemanager.ClusterShutdownITCase$1$1.<init>(ClusterShutdownITCase.java:72)
	at org.apache.flink.runtime.resourcemanager.ClusterShutdownITCase$1.<init>(ClusterShutdownITCase.java:72)
	at org.apache.flink.runtime.resourcemanager.ClusterShutdownITCase.testClusterShutdown(ClusterShutdownITCase.java:71)

Results :

Failed tests:
  ClusterShutdownITCase.testClusterShutdown:71 assertion failed: timeout (29897828577 nanoseconds) during expectMsgClass waiting for class org.apache.flink.runtime.clusterframework.messages.StopClusterSuccessful
{code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (FLINK-2879) Links in documentation are broken
Nikolaas Steenbergen created FLINK-2879:
-------------------------------------------

             Summary: Links in documentation are broken
                 Key: FLINK-2879
                 URL: https://issues.apache.org/jira/browse/FLINK-2879
             Project: Flink
          Issue Type: Bug
          Components: website
            Reporter: Nikolaas Steenbergen
            Priority: Minor

On https://ci.apache.org/projects/flink/flink-docs-master/internals/general_arch.html the images of the system components link to wrong locations, e.g.:
https://ci.apache.org/projects/flink/flink-docs-master/internals/internals/general_arch.html
instead of:
https://ci.apache.org/projects/flink/flink-docs-master/internals/general_arch.html
[jira] [Commented] (FLINK-2522) Integrate Streaming Api into Flink-scala-shell
[ https://issues.apache.org/jira/browse/FLINK-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14959016#comment-14959016 ]

Nikolaas Steenbergen commented on FLINK-2522:
---------------------------------------------

I'm working on writing a test case for the Scala shell streaming mode, where the shell should create a local streaming environment by itself. However, running this test gives me:
{code}
org.apache.flink.api.common.InvalidProgramException: The RemoteEnvironment cannot be used when submitting a program through a client, or running in a TestEnvironment context.
	at org.apache.flink.streaming.api.environment.RemoteStreamEnvironment.<init>(RemoteStreamEnvironment.java:132)
	at org.apache.flink.streaming.api.environment.RemoteStreamEnvironment.<init>(RemoteStreamEnvironment.java:103)
	at org.apache.flink.streaming.api.environment.RemoteStreamEnvironment.<init>(RemoteStreamEnvironment.java:80)
	...
{code}
Am I forced to just skip the test for the local streaming Scala shell, or is there a workaround for this?

> Integrate Streaming Api into Flink-scala-shell
> ----------------------------------------------
>
>                 Key: FLINK-2522
>                 URL: https://issues.apache.org/jira/browse/FLINK-2522
>             Project: Flink
>          Issue Type: Improvement
>          Components: Scala Shell
>            Reporter: Nikolaas Steenbergen
>            Assignee: Nikolaas Steenbergen
>
> startup scala shell with "-s" or "-streaming" flag to use the streaming api
[jira] [Assigned] (FLINK-2613) Print usage information for Scala Shell
[ https://issues.apache.org/jira/browse/FLINK-2613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nikolaas Steenbergen reassigned FLINK-2613:
-------------------------------------------

    Assignee: Nikolaas Steenbergen

> Print usage information for Scala Shell
> ---------------------------------------
>
>                 Key: FLINK-2613
>                 URL: https://issues.apache.org/jira/browse/FLINK-2613
>             Project: Flink
>          Issue Type: Improvement
>          Components: Scala Shell
>    Affects Versions: 0.10
>            Reporter: Maximilian Michels
>            Assignee: Nikolaas Steenbergen
>            Priority: Minor
>              Labels: starter
>             Fix For: 0.10
>
> The Scala Shell startup script starts a {{FlinkMiniCluster}} by default if invoked with no arguments.
> We should add a {{--help}} or {{-h}} option to make it easier for people to find out how to configure remote execution. Alternatively, we could print a notice on the local startup explaining how to start the shell in remote mode.
[jira] [Commented] (FLINK-2613) Print usage information for Scala Shell
[ https://issues.apache.org/jira/browse/FLINK-2613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14734731#comment-14734731 ]

Nikolaas Steenbergen commented on FLINK-2613:
---------------------------------------------

I've added some additional newlines to the starting/connecting messages to make them more visible. I think [~mxm] is right: the user can't be unaware of the startup mode now that they have to type it explicitly. The other thing is that the FlinkILoop handles the startup message (the squirrel) but is agnostic to the startup mode (it always connects through host and port); we would need to explicitly tell it the startup mode just to print it below the squirrel. I think it's nicer (code-wise) the way it is now.
[jira] [Commented] (FLINK-2613) Print usage information for Scala Shell
[ https://issues.apache.org/jira/browse/FLINK-2613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14733795#comment-14733795 ]

Nikolaas Steenbergen commented on FLINK-2613:
---------------------------------------------

So, I've rewritten the startup of the Scala shell. Now you start it with either:
{code}
$ ./bin/start-scala-shell.sh local
{code}
or
{code}
$ ./bin/start-scala-shell.sh remote
{code}
You can specify additional jar files with {{-a | --addclasspath}}, and you can get help with {{-h | --help}}. It will show the following message:
{code}
Flink Scala Shell
Usage: start-scala-shell.sh [local|remote] [options] ...

  -h | --help
        prints this usage text

Command: local [options]
starts Flink scala shell with a local Flink cluster
  -a | --addclasspath
        specifies additional jars to be used in Flink

Command: remote [options] <host> <port>
starts Flink scala shell connecting to a remote cluster
  <host>
        remote host name as string
  <port>
        remote port as integer
  -a | --addclasspath
        specifies additional jars to be used in Flink
{code}
When starting up correctly it says {{Starting local Flink cluster (host: localhost, port 6123)}} or {{Connecting to Flink cluster (host: xyz, port 6123)}}, respectively.

Any additional suggestions or comments?
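The local/remote dispatch described above can be sketched as a small wrapper script. This is a minimal illustration of the argument scheme only, not the actual start-scala-shell.sh; all function and variable names are hypothetical:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a [local|remote] subcommand parser with an
# optional -a/--addclasspath flag; not the real start-scala-shell.sh.

usage() {
  echo "Usage: start-scala-shell.sh [local|remote <host> <port>] [-a <jars>]"
}

parse_args() {
  MODE="$1"; shift || true
  case "$MODE" in
    local)
      ;;                                      # local mini cluster, no host/port
    remote)
      HOST="$1"; PORT="$2"; shift 2 || true ;;
    *)
      usage; return 1 ;;                      # includes -h / --help
  esac
  # remaining flags are shared by both subcommands
  while [ $# -gt 0 ]; do
    case "$1" in
      -a|--addclasspath) EXTRA_JARS="$2"; shift 2 ;;
      *) shift ;;
    esac
  done
}
```

For example, `parse_args remote myhost 6123 -a gelly.jar` leaves `MODE=remote`, `HOST=myhost`, `PORT=6123` and `EXTRA_JARS=gelly.jar`.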
[jira] [Commented] (FLINK-2613) Print usage information for Scala Shell
[ https://issues.apache.org/jira/browse/FLINK-2613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730508#comment-14730508 ]

Nikolaas Steenbergen commented on FLINK-2613:
---------------------------------------------

Sure, no problem.
[jira] [Commented] (FLINK-2613) Print usage information for Scala Shell
[ https://issues.apache.org/jira/browse/FLINK-2613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729346#comment-14729346 ]

Nikolaas Steenbergen commented on FLINK-2613:
---------------------------------------------

Actually, it does this already as well. Either:
{code}
Connecting to remote server (host: localhost, port: 6123)
{code}
or
{code}
Creating new local server
{code}
[jira] [Commented] (FLINK-2613) Print usage information for Scala Shell
[ https://issues.apache.org/jira/browse/FLINK-2613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14728819#comment-14728819 ]

Nikolaas Steenbergen commented on FLINK-2613:
---------------------------------------------

There is a {{--help}} option:
{code}
Flink Scala Shell
Usage: start-scala-shell.sh [options]

  -p <port> | --port <port>
        specifies port of running JobManager
  -h <host> | --host <host>
        specifies host name of running JobManager
  --help
        prints this usage text
{code}
I could add something like:
{code}
The Scala-Shell starts a local mini cluster by default; to connect to a remote cluster please specify --port and --host
{code}
Admittedly it should also work with {{-h}}, which it does not right now.
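The clash mentioned at the end is easy to see with getopts: a single-letter option can carry only one meaning, so as long as {{-h}} takes a host argument it cannot also serve as a help flag. A hypothetical sketch of the old-style parser (not the real script):

```shell
# Hypothetical sketch of the pre-rewrite option scheme; not the real
# script. With getopts, "h" is bound to --host and takes an argument,
# so a bare "-h" cannot simultaneously mean "print help".
parse_old_style() {
  OPTIND=1                    # reset so the function can be called repeatedly
  HOST="localhost"; PORT="6123"
  while getopts "p:h:" opt; do
    case "$opt" in
      p) PORT="$OPTARG" ;;    # -p <port>: JobManager port
      h) HOST="$OPTARG" ;;    # -h <host>: JobManager host, shadows "help"
    esac
  done
}
```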
[jira] [Commented] (FLINK-2522) Integrate Streaming Api into Flink-scala-shell
[ https://issues.apache.org/jira/browse/FLINK-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711058#comment-14711058 ]

Nikolaas Steenbergen commented on FLINK-2522:
---------------------------------------------

Hey Márton, I've started working on the Scala shell extension in: https://github.com/nikste/flink/tree/Flink-2522_Scala_shell_streaming
A collection function would be nice ;) What about the one in flink-streaming-contrib? Is there anything speaking against implementing it in Scala?
[jira] [Created] (FLINK-2522) Integrate Streaming Api into Flink-scala-shell
Nikolaas Steenbergen created FLINK-2522:
-------------------------------------------

             Summary: Integrate Streaming Api into Flink-scala-shell
                 Key: FLINK-2522
                 URL: https://issues.apache.org/jira/browse/FLINK-2522
             Project: Flink
          Issue Type: Improvement
          Components: Scala Shell
            Reporter: Nikolaas Steenbergen
            Assignee: Nikolaas Steenbergen

startup scala shell with -s or -streaming flag to use the streaming api
[jira] [Assigned] (FLINK-2161) Flink Scala Shell does not support external jars (e.g. Gelly, FlinkML)
[ https://issues.apache.org/jira/browse/FLINK-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nikolaas Steenbergen reassigned FLINK-2161:
-------------------------------------------

    Assignee: Nikolaas Steenbergen

> Flink Scala Shell does not support external jars (e.g. Gelly, FlinkML)
> ----------------------------------------------------------------------
>
>                 Key: FLINK-2161
>                 URL: https://issues.apache.org/jira/browse/FLINK-2161
>             Project: Flink
>          Issue Type: Improvement
>            Reporter: Till Rohrmann
>            Assignee: Nikolaas Steenbergen
>
> Currently, there is no easy way to load and ship external libraries/jars with Flink's Scala shell. Assume that you want to run some Gelly graph algorithms from within the Scala shell; then you have to put the Gelly jar manually in the lib directory and make sure that this jar is also available on your cluster, because it is not shipped with the user code.
> It would be good to have a simple mechanism to specify additional jars upon startup of the Scala shell. These jars should then also be shipped to the cluster.
[jira] [Commented] (FLINK-2161) Flink Scala Shell does not support external jars (e.g. Gelly, FlinkML)
[ https://issues.apache.org/jira/browse/FLINK-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14576958#comment-14576958 ]

Nikolaas Steenbergen commented on FLINK-2161:
---------------------------------------------

Unfortunately, adding jars at runtime is broken (for Scala 2.10); see: https://issues.scala-lang.org/browse/SI-6502
I'm editing the bash script to load them at startup instead.
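A bash-level sketch of that startup-time workaround: collect the jars passed on the command line into a classpath fragment before the JVM starts, instead of trying to add them inside the running REPL. Function and variable names here are hypothetical, not the actual script's:

```shell
# Hypothetical sketch: join jar paths passed at startup into a
# colon-separated classpath fragment, to be handed to the JVM that
# launches the shell (runtime jar loading is broken on Scala 2.10,
# see SI-6502).
build_classpath() {
  EXTRA_CLASSPATH=""
  for jar in "$@"; do
    if [ -z "$EXTRA_CLASSPATH" ]; then
      EXTRA_CLASSPATH="$jar"
    else
      EXTRA_CLASSPATH="$EXTRA_CLASSPATH:$jar"
    fi
  done
}
```

For example, `build_classpath /opt/gelly.jar /opt/flinkml.jar` yields `/opt/gelly.jar:/opt/flinkml.jar`, which the launcher could then append to its `-classpath` argument.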
[jira] [Created] (FLINK-2094) Implement Word2Vec
Nikolaas Steenbergen created FLINK-2094:
-------------------------------------------

             Summary: Implement Word2Vec
                 Key: FLINK-2094
                 URL: https://issues.apache.org/jira/browse/FLINK-2094
             Project: Flink
          Issue Type: Improvement
          Components: Machine Learning Library
            Reporter: Nikolaas Steenbergen
            Assignee: Nikolaas Steenbergen
            Priority: Minor

implement Word2Vec: http://arxiv.org/pdf/1402.3722v1.pdf
[jira] [Commented] (FLINK-1907) Scala Interactive Shell
[ https://issues.apache.org/jira/browse/FLINK-1907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14507306#comment-14507306 ]

Nikolaas Steenbergen commented on FLINK-1907:
---------------------------------------------

I've added a class to start up the shell from within Flink (org.apache.flink.api.scala.FlinkShell), so it can be started from the Flink jar directly.
Simply writing out the compiled lines of the wordcount example, putting them in a jar ({{jar cf output.jar inputDir}}) and uploading the jar to a local cluster ({{sh start-local.sh}}) via the webclient leads to:
{code}
org.apache.flink.client.program.ProgramInvocationException: Neither a 'Main-Class', nor a 'program-class' entry was found in the jar file.
[..]
{code}
Do you have a suggestion for the easiest way to put the single compiled shell commands into an executable format?

> Scala Interactive Shell
> -----------------------
>
>                 Key: FLINK-1907
>                 URL: https://issues.apache.org/jira/browse/FLINK-1907
>             Project: Flink
>          Issue Type: New Feature
>          Components: Scala API
>            Reporter: Nikolaas Steenbergen
>            Assignee: Nikolaas Steenbergen
>            Priority: Minor
>
> Build an interactive Shell for the Scala api.
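The ProgramInvocationException above is about the jar's manifest: {{jar cf}} writes no entry-point attribute, so the client finds neither a Main-Class nor a program-class entry. One common way to fix that is to package with an explicit manifest; this is a sketch only, and the class name org.example.ShellJob is a hypothetical placeholder:

```shell
# Sketch: generate a manifest declaring an entry point, then package
# with "jar cfm" so the client can find a Main-Class entry.
# org.example.ShellJob is a hypothetical placeholder class name.
write_manifest() {
  cat > MANIFEST.MF <<'EOF'
Manifest-Version: 1.0
Main-Class: org.example.ShellJob
EOF
}
# Packaging would then be:
#   jar cfm output.jar MANIFEST.MF -C inputDir .
```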
[jira] [Commented] (FLINK-1907) Scala Interactive Shell
[ https://issues.apache.org/jira/browse/FLINK-1907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14503349#comment-14503349 ]

Nikolaas Steenbergen commented on FLINK-1907:
---------------------------------------------

Hey, I don't know if I'm doing this correctly. I've cloned and branched the Flink repository and added a wrapper for the Scala ILoop: [https://github.com/nikste/flink/tree/Scala_shell]
You can put in a custom welcome message and prompt (which is certainly nice). Via {{writeFlinkVD}} it can write the code compiled from user input out to the /tmp/ directory; it writes a bunch of directories (line0, line1, etc.), each containing a bunch of classes (read.class, eval.class). Initially I thought you could compile a jar out of it and somehow send it to Flink to process. However, I am not so sure anymore that this can solve the ClassNotFoundException.
[jira] [Created] (FLINK-1907) Scala Interactive Shell
Nikolaas Steenbergen created FLINK-1907:
-------------------------------------------

             Summary: Scala Interactive Shell
                 Key: FLINK-1907
                 URL: https://issues.apache.org/jira/browse/FLINK-1907
             Project: Flink
          Issue Type: New Feature
          Components: Scala API
            Reporter: Nikolaas Steenbergen
            Priority: Minor

Build an interactive Shell for the Scala api.
[jira] [Created] (FLINK-1832) start-local.bat/start-local.sh does not work if there is a white space in the file path (windows)
Nikolaas Steenbergen created FLINK-1832:
-------------------------------------------

             Summary: start-local.bat/start-local.sh does not work if there is a white space in the file path (windows)
                 Key: FLINK-1832
                 URL: https://issues.apache.org/jira/browse/FLINK-1832
             Project: Flink
          Issue Type: Bug
          Components: Local Runtime
    Affects Versions: 0.8.1
         Environment: Windows 8.1
            Reporter: Nikolaas Steenbergen
            Priority: Minor

start-local.bat can be run, but exits immediately.
start-local.sh with cygwin exits with an error:

{code}
owner_@owner /cygdrive/e/Program Files/flink-0.8.1-bin-hadoop2/flink-0.8.1-bin-hadoop2/flink-0.8.1/bin
$ ./start-local.sh
cygpath: error converting Files/flink-0.8.1-bin-hadoop2/flink-0.8.1-bin-hadoop2/flink-0.8.1/bin/.. - No such file or directory
sed: can't read E:\Program/conf/flink-conf.yaml: No such file or directory
sed: can't read E:\Program/conf/flink-conf.yaml: No such file or directory
sed: can't read E:\Program/conf/flink-conf.yaml: No such file or directory
sed: can't read E:\Program/conf/flink-conf.yaml: No such file or directory
sed: can't read E:\Program/conf/flink-conf.yaml: No such file or directory
sed: can't read E:\Program/conf/flink-conf.yaml: No such file or directory
sed: can't read E:\Program/conf/flink-conf.yaml: No such file or directory
cygpath: error converting /cygdrive/e/Program:Files/flink-0.8.1-bin-hadoop2/flink-0.8.1-bin-hadoop2/flink-0.8.1/bin/../lib/*.jar:. - Unknown error -1
./start-local.sh: line 27: E:\Program/bin/jobmanager.sh: No such file or directory
{code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
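The cygpath and sed errors above are the classic symptom of unquoted variable expansions: "E:/Program Files/..." gets word-split at the space, so the tools see "E:\Program" and "Files/..." as separate arguments. A minimal self-contained demonstration of the difference (not Flink's script; the variable names are illustrative):

```shell
# Sketch: word splitting on an unquoted path containing a space.
# count_args reports how many arguments it received.
count_args() { echo $#; }

demo() {
  dir="/tmp/Program Files/flink"   # path with a space, like "E:/Program Files"
  UNQUOTED=$(count_args $dir)      # unquoted: split into 2 words
  QUOTED=$(count_args "$dir")      # quoted: stays 1 argument
}
```

The fix in start-local.sh would be to double-quote every expansion of the variables holding the installation path before they reach cygpath, sed, and the jobmanager.sh invocation.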