Re: JobTimeoutException: Lost connection to JobManager

2015-04-15 Thread Maximilian Michels
The exception indicates that you're still using the old version. It takes
some time for the new Maven artifact to get deployed to the snapshot
repository. Apparently, an artifact has already been deployed this morning.
Did you delete the jar files in your .m2 folder?
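In case it helps, here is a minimal sketch of one way to do that, assuming the default local-repository location (~/.m2/repository); the helper name and the purge-then-rebuild sequence are illustrative, not the only option:

```shell
# Sketch: drop the cached Flink artifacts so Maven re-downloads the
# freshly deployed 0.9-SNAPSHOT jars on the next build.
purge_flink_snapshots() {
    repo="${1:-$HOME/.m2/repository}"   # local Maven repository root
    rm -rf "$repo/org/apache/flink"     # remove all cached Flink artifacts
    echo "purged $repo/org/apache/flink"
}

purge_flink_snapshots
# Afterwards, force Maven to re-check remote snapshots in your project:
#   mvn clean install -U
```

The -U (--update-snapshots) flag makes Maven re-resolve SNAPSHOT dependencies even if it checked the remote repository recently.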

On Wed, Apr 15, 2015 at 1:38 PM, Mohamed Nadjib MAMI m...@iai.uni-bonn.de
wrote:

  Hello,

 I'm still facing the problem with the 0.9-SNAPSHOT version. I tried to remove
 the libraries and download them again, but the same issue persists.

 Greetings,
 Mohamed


 Exception in thread "main" org.apache.flink.runtime.client.JobTimeoutException: Lost connection to JobManager
     at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:164)
     at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:198)
     at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:188)
     at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:179)
     at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:54)
     at Main.main(Main.java:142)
 Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 milliseconds]
     at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
     at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
     at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
     at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
     at scala.concurrent.Await$.result(package.scala:107)
     at scala.concurrent.Await.result(package.scala)
     at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:143)
     ... 5 more


 On 15.04.2015 01:02, Stephan Ewen wrote:

 I pushed a fix to the master. The problem should now be gone.

  Please let us know if you experience other issues!

  Greetings,
 Stephan


 On Tue, Apr 14, 2015 at 9:57 PM, Mohamed Nadjib MAMI m...@iai.uni-bonn.de
  wrote:

  Hello,

 Just a few seconds after I sent my message, I received your email. I just
 wanted to flag the need for a fix.

 Happy to see how active the project is. Great work.


 On 14.04.2015 21:50, Stephan Ewen wrote:

 You are on the latest snapshot version? I think there is an inconsistency
 in there. I will try to fix that tonight.

 Can you actually use the milestone1 version? That one should be good.

 Greetings,
 Stephan
  On 14.04.2015 20:31, Fotis P fotis...@gmail.com wrote:

Hello everyone,

  I am getting this weird exception while running some simple counting
 jobs in Flink.

 Exception in thread "main" org.apache.flink.runtime.client.JobTimeoutException: Lost connection to JobManager
     at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:164)
     at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:198)
     at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:188)
     at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:179)
     at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:54)
     at trackers.preprocessing.ExtractInfoFromLogs.main(ExtractInfoFromLogs.java:133)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:606)
     at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
 Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 milliseconds]
     at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
     at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
     at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
     at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
     at scala.concurrent.Await$.result(package.scala:107)
     at scala.concurrent.Await.result(package.scala)
     at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:143)
     ... 10 more


  The only call above that comes from my code is
 ExtractInfoFromLogs.java:133, which is the environment.execute() method.

  This exception occurs when dealing with largish files (10 GB). No
 exception is thrown when I work with a smaller subset of my data.
  Also, I would swear it was working fine until a few days ago, and
 the code has not changed :S The only change was a re-import of Maven
 dependencies.

  I am unsure what other information I could provide that would help you
 help me :)

  I am running everything locally through the IntelliJ IDE. The Maven
 dependency is set to 0.9-SNAPSHOT.
  I have an 8-core Ubuntu 14.04 machine.

  Thanks in advance :D


Re: JobTimeoutException: Lost connection to JobManager

2015-04-15 Thread Ufuk Celebi
On 15 Apr 2015, at 14:18, Maximilian Michels m...@apache.org wrote:

 The exception indicates that you're still using the old version. It takes 
 some time for the new Maven artifact to get deployed to the snapshot 
 repository. Apparently, an artifact has already been deployed this morning. 
 Did you delete the jar files in your .m2 folder?

I think that's what he meant.

The problem is that the snapshot repositories take some time to synchronize.

Please
1. git clone https://github.com/apache/flink.git
2. cd flink
3. mvn clean install -DskipTests

This way you build Flink yourself and are guaranteed to work on a version with 
the fix.
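One way to sanity-check that the locally built snapshot really ended up in your local repository (the flink-core path below is an assumption about the default layout; adjust the repository root if yours lives elsewhere):

```shell
# Sketch: after `mvn clean install`, the build installs the 0.9-SNAPSHOT
# artifacts into the local Maven repository; report whether they are there.
check_local_snapshot() {
    repo="${1:-$HOME/.m2/repository}"   # local Maven repository root
    dir="$repo/org/apache/flink/flink-core/0.9-SNAPSHOT"
    if [ -d "$dir" ]; then
        echo "found: $dir"
    else
        echo "0.9-SNAPSHOT not installed yet"
    fi
}

check_local_snapshot
```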

Sorry for the inconvenience. Does this solve it?

– Ufuk

Re: JobTimeoutException: Lost connection to JobManager

2015-04-14 Thread Mohamed Nadjib MAMI

Hello,

Just a few seconds after I sent my message, I received your email. I just
wanted to flag the need for a fix.

Happy to see how active the project is. Great work.

On 14.04.2015 21:50, Stephan Ewen wrote:


You are on the latest snapshot version? I think there is an
inconsistency in there. I will try to fix that tonight.


Can you actually use the milestone1 version? That one should be good.

Greetings,
Stephan

On 14.04.2015 20:31, Fotis P fotis...@gmail.com wrote:


Hello everyone,

I am getting this weird exception while running some simple
counting jobs in Flink.

Exception in thread "main" org.apache.flink.runtime.client.JobTimeoutException: Lost connection to JobManager
    at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:164)
    at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:198)
    at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:188)
    at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:179)
    at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:54)
    at trackers.preprocessing.ExtractInfoFromLogs.main(ExtractInfoFromLogs.java:133)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 milliseconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at scala.concurrent.Await.result(package.scala)
    at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:143)
    ... 10 more


The only call above that comes from my code is
ExtractInfoFromLogs.java:133, which is the environment.execute()
method.

This exception occurs when dealing with largish files (10 GB). No
exception is thrown when I work with a smaller subset of my data.
Also, I would swear it was working fine until a few days ago, and
the code has not changed :S The only change was a re-import of
Maven dependencies.

I am unsure what other information I could provide that would help
you help me :)

I am running everything locally through the IntelliJ IDE. The Maven
dependency is set to 0.9-SNAPSHOT.
I have an 8-core Ubuntu 14.04 machine.

Thanks in advance :D



--
Regards, Grüße, Cordialement, Recuerdos, Saluti, προσρήσεις, 问候, 
تحياتي. Mohamed Nadjib Mami

PhD Student - EIS Department - Bonn University, Germany.
About me! http://www.strikingly.com/mohamed-nadjib-mami
LinkedIn


Re: JobTimeoutException: Lost connection to JobManager

2015-04-14 Thread Stephan Ewen
You are on the latest snapshot version? I think there is an inconsistency
in there. I will try to fix that tonight.

Can you actually use the milestone1 version? That one should be good.

Greetings,
Stephan
 On 14.04.2015 20:31, Fotis P fotis...@gmail.com wrote:

 Hello everyone,

 I am getting this weird exception while running some simple counting jobs
 in Flink.

 Exception in thread "main" org.apache.flink.runtime.client.JobTimeoutException: Lost connection to JobManager
     at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:164)
     at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:198)
     at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:188)
     at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:179)
     at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:54)
     at trackers.preprocessing.ExtractInfoFromLogs.main(ExtractInfoFromLogs.java:133)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:606)
     at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
 Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 milliseconds]
     at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
     at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
     at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
     at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
     at scala.concurrent.Await$.result(package.scala:107)
     at scala.concurrent.Await.result(package.scala)
     at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:143)
     ... 10 more


 The only call above that comes from my code is
 ExtractInfoFromLogs.java:133, which is the environment.execute() method.

 This exception occurs when dealing with largish files (10 GB). No exception
 is thrown when I work with a smaller subset of my data.
 Also, I would swear it was working fine until a few days ago, and the
 code has not changed :S The only change was a re-import of Maven
 dependencies.

 I am unsure what other information I could provide that would help you
 help me :)

 I am running everything locally through the IntelliJ IDE. The Maven
 dependency is set to 0.9-SNAPSHOT.
 I have an 8-core Ubuntu 14.04 machine.

 Thanks in advance :D



Re: JobTimeoutException: Lost connection to JobManager

2015-04-14 Thread Stephan Ewen
I pushed a fix to the master. The problem should now be gone.

Please let us know if you experience other issues!

Greetings,
Stephan


On Tue, Apr 14, 2015 at 9:57 PM, Mohamed Nadjib MAMI m...@iai.uni-bonn.de
wrote:

  Hello,

 Just a few seconds after I sent my message, I received your email. I just
 wanted to flag the need for a fix.

 Happy to see how active the project is. Great work.


 On 14.04.2015 21:50, Stephan Ewen wrote:

 You are on the latest snapshot version? I think there is an inconsistency
 in there. I will try to fix that tonight.

 Can you actually use the milestone1 version? That one should be good.

 Greetings,
 Stephan
  On 14.04.2015 20:31, Fotis P fotis...@gmail.com wrote:

Hello everyone,

  I am getting this weird exception while running some simple counting
 jobs in Flink.

 Exception in thread "main" org.apache.flink.runtime.client.JobTimeoutException: Lost connection to JobManager
     at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:164)
     at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:198)
     at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:188)
     at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:179)
     at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:54)
     at trackers.preprocessing.ExtractInfoFromLogs.main(ExtractInfoFromLogs.java:133)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:606)
     at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
 Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 milliseconds]
     at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
     at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
     at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
     at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
     at scala.concurrent.Await$.result(package.scala:107)
     at scala.concurrent.Await.result(package.scala)
     at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:143)
     ... 10 more


  The only call above that comes from my code is
 ExtractInfoFromLogs.java:133, which is the environment.execute() method.

  This exception occurs when dealing with largish files (10 GB). No
 exception is thrown when I work with a smaller subset of my data.
  Also, I would swear it was working fine until a few days ago, and
 the code has not changed :S The only change was a re-import of Maven
 dependencies.

  I am unsure what other information I could provide that would help you
 help me :)

  I am running everything locally through the IntelliJ IDE. The Maven
 dependency is set to 0.9-SNAPSHOT.
  I have an 8-core Ubuntu 14.04 machine.

  Thanks in advance :D


 --
 Regards, Grüße, Cordialement, Recuerdos, Saluti, προσρήσεις, 问候, تحياتي.
 Mohamed Nadjib Mami
 PhD Student - EIS Department - Bonn University, Germany.
 About me! http://www.strikingly.com/mohamed-nadjib-mami
 LinkedIn