Re: Class loading issues when using Remote Execution Environment

2018-04-30 Thread kedar mhaswade
Chesnay,

I have filed https://issues.apache.org/jira/browse/FLINK-9267 to keep track
of this issue.

Regards,
Kedar

On Fri, Apr 27, 2018 at 11:50 AM, kedar mhaswade <kedar.mhasw...@gmail.com>
wrote:

> Thanks again!
>
> This is strange. With both Flink 1.3.3 and Flink 1.6.0-SNAPSHOT and
> 1) copying gradoop-demo-shaded.jar to /lib, and
> 2) using RemoteEnvironment with just jmHost and jmPort (no jarFiles)
>
> I get the same exception [1], caused by:
> *Caused by: com.typesafe.config.ConfigException$Missing: No configuration
> setting found for key 'akka.remote.log-received-messages'.*
>
> This key is not documented anywhere, so I am confused. Also, with the
> copying described above, even though the JM and TM are running, the Flink
> dashboard on http://localhost:8081 is *unavailable*!
>
> With Flink 1.3.3 and Flink 1.6.0-SNAPSHOT
> 1) NOT copying gradoop-shaded.jar in /lib, and
> 2) using RemoteEnvironment with jmHost, jmPort and jarFiles = {}
>
> I get the same exception, however the Flink dashboard on
> http://localhost:8081 is *available*! This makes me believe that this is
> somehow an insidious classloading issue :(.
> I am really perplexed by this behavior. Let me stick with the Flink 1.3.3
> installation, as you suggested, for now.
>
> If you have any other debugging tips, please let me know. But I am running
> out of ideas to make it run with a non-local environment.
>
> Regards,
> Kedar
>
>
>
>
> [1] Gradoop shaded jar in /lib -- exception on the web-app:
> org.apache.flink.client.program.ProgramInvocationException: Could not start the ActorSystem needed to talk to the JobManager.
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:461)
> at org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:105)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:442)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:429)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:404)
> at org.apache.flink.client.RemoteExecutor.executePlanWithJars(RemoteExecutor.java:211)
> at org.apache.flink.client.RemoteExecutor.executePlan(RemoteExecutor.java:188)
> at org.apache.flink.api.java.RemoteEnvironment.execute(RemoteEnvironment.java:172)
> at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:926)
> at org.gradoop.demo.server.RequestHandler.getResponse(RequestHandler.java:447)
> at org.gradoop.demo.server.RequestHandler.createResponse(RequestHandler.java:430)
> at org.gradoop.demo.server.RequestHandler.executeCypher(RequestHandler.java:121)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
> at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at com.sun.jersey.server.impl.container.grizzly2.GrizzlyContainer._service(GrizzlyContainer.java:222)
> at com.sun.jersey.server.impl.container.grizzly2.GrizzlyContainer.service(GrizzlyContainer.java:192)
> at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:164)
> at org.glassfish.grizzly.http.server.HttpHandlerChain.se

Re: Class loading issues when using Remote Execution Environment

2018-04-27 Thread kedar mhaswade
)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:815)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:115)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:55)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:135)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:567)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:547)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.util.FlinkException: Could not start the ActorSystem lazily.
at org.apache.flink.client.program.ClusterClient$LazyActorSystemLoader.get(ClusterClient.java:230)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:459)
... 47 more
*Caused by: com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'akka.remote.log-received-messages'*
at com.typesafe.config.impl.SimpleConfig.findKey(SimpleConfig.java:124)
at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:145)
at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:151)
at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:151)
at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:159)
at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:164)
at com.typesafe.config.impl.SimpleConfig.getBoolean(SimpleConfig.java:174)
at akka.remote.RemoteSettings.<init>(RemoteSettings.scala:24)
at akka.remote.RemoteActorRefProvider.<init>(RemoteActorRefProvider.scala:114)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78)
at scala.util.Try$.apply(Try.scala:192)
at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:73)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:84)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:84)
at scala.util.Success.flatMap(Try.scala:231)
at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:84)
at akka.actor.ActorSystemImpl.liftedTree1$1(ActorSystem.scala:585)
at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:578)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:142)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:119)
at akka.actor.ActorSystem$.create(ActorSystem.scala:67)
at org.apache.flink.runtime.akka.AkkaUtils$.createActorSystem(AkkaUtils.scala:104)
at org.apache.flink.runtime.akka.AkkaUtils$.createActorSystem(AkkaUtils.scala:92)
at org.apache.flink.runtime.akka.AkkaUtils.createActorSystem(AkkaUtils.scala)
at org.apache.flink.client.program.ClusterClient$LazyActorSystemLoader.get(ClusterClient.java:226)


On Thu, Apr 26, 2018 at 11:52 PM, Chesnay Schepler <ches...@apache.org>
wrote:

> First, a small correction for my previous mail:
>
> I could reproduce your problems locally when submitting the fat-jar.
> Turns out I never submitted the fat-jar, as I didn't pass the jar file
> argument to RemoteEnvironment.
>
> Now on to your questions:
>
> *What version of Flink are you trying with?*
> I got it working *once* with 1.6-SNAPSHOT, but I would recommend sticking
> with 1.3.1 since that is the version gradoop depends on. (I haven't tried
> it with this version yet, but that's the next thing on my list.)
>
>
> *Are there other config changes (flink-conf.yaml) that you made in your
> cluster? *It was the standard config.
>
>
> *Is org.apache.flink.api.common.io.FileOutputFormat a good alternative to
> LocalCollectionOutputFormat? *It can be used, but if the result is small
> you could also use accumulators.
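>
> For example, roughly (an untested sketch; "resultCount" is an illustrative
> accumulator name):
>
> import org.apache.flink.api.common.JobExecutionResult;
> import org.apache.flink.api.common.accumulators.LongCounter;
> import org.apache.flink.api.common.functions.RichMapFunction;
> import org.apache.flink.configuration.Configuration;
>
> public class CountingMapper extends RichMapFunction<String, String> {
>     private final LongCounter resultCount = new LongCounter();
>
>     @Override
>     public void open(Configuration parameters) {
>         getRuntimeContext().addAccumulator("resultCount", resultCount);
>     }
>
>     @Override
>     public String map(String value) {
>         resultCount.add(1L); // count records instead of collecting them locally
>         return value;
>     }
> }
>
> // on the client, after the job finishes:
> JobExecutionResult result = env.execute();
> Long total = result.getAccumulatorResult("resultCount");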
>
>
> *Do you think it is better to use jarFiles argument on
> createRemoteEnvironment? *Yes, once we get it working this is the way to
> go.
>
>
> On 26.04.2018 18:42, kedar mhaswade wrote:
>
> Thanks Chesnay for your incredible help!
>
> I will try out the suggestions again. A few questions:
> - What version of Flink are you trying with? I have had issues when I
> placed the gradoop-demo-shaded.jar in the lib folder on Flink
> installation (1.4 even refused to start!).
> - Are there other config changes (flink-conf.yaml) that you made in your
> cluster?
> - Is org.apache.flink.api.common.io.FileOutputFormat a good alternative
> to LocalCollectionOutputFormat, or should I use

Re: Setting the parallelism in a cluster of machines properly

2018-04-26 Thread kedar mhaswade
On Thu, Apr 26, 2018 at 10:47 AM, Makis Pap  wrote:

> OK Michael!
>
> I will look into it and will come back at you! Thanks for the help. I
> agree that it is quite suspicious the par = 8
>
> Jps? Meaning?
>
jps is a tool that comes with the JDK (see $JAVA_HOME/bin). It is modeled
after the POSIX command ps (process status); jps = Java ps => it shows you
the JVMs running on the given computer.
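
For example, on a node of a standalone Flink cluster, jps typically prints
something like this (PIDs are illustrative):

12248 JobManager
12601 TaskManager
13002 Jps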


>
> Oh I should mention that the JobManager node is also a TaskManager.
>
> Best,
> Max
>
> On 27 Apr 2018, at 01:39, TechnoMage  wrote:
>
> Check that you have slaves and masters set correctly on all machines, and
> in particular the one submitting jobs.  Make sure that the machine
> submitting the job is talking to the correct job manager
> (jobmanager.rpc.address).  It really sounds like you are somehow
> submitting jobs to only one taskmanager.
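>
> For example (illustrative hostnames), conf/masters names the job manager
> host and conf/slaves lists one worker host per line:
>
> # conf/masters
> jobmanager-host
>
> # conf/slaves
> worker-1
> worker-2
>
> (The per-machine slot count is taskmanager.numberOfTaskSlots in
> flink-conf.yaml.)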
>
> You should also use jps to verify that you only have one jobmanager
> running and the worker machines only have taskmanager running.
>
> Michael
>
> On Apr 26, 2018, at 11:34 AM, Makis Pap  wrote:
>
> So what should the correct configs be then?
>
> I have set numOfSlotsPerTaskManager = 8, which is reasonable as each has 8
> CPUs.
>
> Best,
> Makis
>
> On Fri, 27 Apr 2018, 01:26 TechnoMage,  wrote:
>
>> You need to verify your configs are correct.  Check that the local
>> machine sees all the task managers; that is the most likely reason it will
>> reject a higher parallelism.  I use a Java program to submit to a 3-node,
>> 18-slot cluster without issue on a job with 18 parallelism.  I have not
>> used the command line to do this, however.
>>
>> Michael
>>
>> > On Apr 26, 2018, at 11:16 AM, m@xi  wrote:
>> >
>> > No man. I have 17 TaskManagers and each has 8 slots.
>> >
>> > Do you think it is better to have 8 TaskManagers (1 slot each)?
>> >
>> > Best,
>> > Max
>> >
>> >
>> >
>> > --
>> > Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
>>
>>
>
>


Re: Class loading issues when using Remote Execution Environment

2018-04-26 Thread kedar mhaswade
Thanks Chesnay for your incredible help!

I will try out the suggestions again. A few questions:
- What version of Flink are you trying with? I have had issues when I
placed the gradoop-demo-shaded.jar in the lib folder of the Flink
installation (1.4 even refused to start!).
- Are there other config changes (flink-conf.yaml) that you made in your
cluster?
- Is org.apache.flink.api.common.io.FileOutputFormat a good alternative to
LocalCollectionOutputFormat, or should I use HadoopOutputFormatCommonBase
(I do want to run the cluster on YARN later; at the moment I am trying on a
standalone cluster).
- Do you think using the jarFiles argument of createRemoteEnvironment
(which deploys the JAR only for this job and does not mess with the entire
Flink cluster) is a better option than placing the JAR(s) in the lib
folder?

Thanks again,
Regards,
Kedar


On Thu, Apr 26, 2018 at 3:14 AM, Chesnay Schepler <ches...@apache.org>
wrote:

> Small update:
>
> I could reproduce your problems locally when submitting the fat-jar.
> I could get the job to run after placing the gradoop-demo-shaded.jar into
> the lib folder.
> I have not tried placing only the gradoop jars into lib yet (but my guess
> is you missed a gradoop jar).
>
> Note that the job fails to run since you use "LocalCollectionOutputFormat"
> which can only be used for local execution, i.e. when the job submission
> and execution happen in the same JVM.
>
>
> On 25.04.2018 14:23, kedar mhaswade wrote:
>
> Thank you for your response!
>
> I have not tried the flink run app.jar route because the way the app is
> set up does not allow me to do it. Basically, the app is a web application
> which serves the UI and also submits a Flink job for running Cypher
> queries. It is a proof-of-concept app, but IMO, a very useful one.
>
> Here's how you can reproduce:
> 1) git clone g...@github.com:kedarmhaswade/gradoop_demo.git (this is my
> fork of gradoop_demo)
> 2) cd gradoop_demo
> 3) git checkout dev => dev is the branch where my changes to make gradoop
> work with remote environment go.
> 4) mvn clean package => should bring the gradoop JARs that this app needs;
> these JARs should then be placed in /lib.
> 5) cp ~/.m2/repository/org/gradoop/gradoop-common/0.3.2/gradoop-common-0.3.2.jar /lib,
> cp ~/.m2/repository/org/gradoop/gradoop-flink/0.3.2/gradoop-flink-0.3.2.jar /lib,
> cp target/gradoop-demo-0.2.0.jar /lib.
> 6) start the local flink cluster (I have tried with the latest
> (built-from-source) 1.6-SNAPSHOT, or 1.4): /bin/start-cluster.sh -- note
> the JM host and port
> 7) /start.sh --jmhost  --jmport 6123 (adjust host and
> port per your cluster) => this is now configured to talk to the
> RemoteEnvironment at given JM host and port.
> 8) open a browser at: http://localhost:2342/gradoop/html/cypher.html
> 9) hit the query button => this would throw the exception
> 10) Ctrl-C the process in 7 and just restart it as
> java -cp target/classes:target/gradoop-demo-shaded.jar org.gradoop.demo.server.Server
> => starts LocalEnvironment
> 11) do 9 again and see the results shown nicely in the browser.
>
> Here is the relevant code:
> 1) Choosing between a Remote or a Local Environment:
> <https://github.com/kedarmhaswade/gradoop_demo/blob/dev/src/main/java/org/gradoop/demo/server/Server.java#L107>
>
> The instructions are correct to my knowledge. Thanks for your willingness
> to try. I have tried everything I can. With different Flink versions, I get
> different results (I have also tried on 1.6-SNAPSHOT with class loading
> config being parent-first, or child-first).
>
> Regards,
> Kedar
>
>
> On Wed, Apr 25, 2018 at 1:08 AM, Chesnay Schepler <ches...@apache.org>
> wrote:
>
>> I couldn't spot any error in what you tried to do. Does the
>> job-submission succeed if you submit the jar through the command-line
>> client?
>>
>> Can you share the project, or a minimal reproducing version?
>>
>>
>> On 25.04.2018 00:41, kedar mhaswade wrote:
>>
>> I am trying to get gradoop_demo
>> <https://github.com/dbs-leipzig/gradoop_demo> (a gradoop based graph
>> visualization app) working on Flink with *Remote* Execution Environment.
>>
>> This app, which is based on Gradoop, submits a job to the *preconfigured*
>> execution environment, collects the results and sends it to the UI for
>> rendering.
>>
>> When the execution environment is configured to be a LocalEnvironment
>> <https://ci.apache.org/projects/flink/flink-docs-stable/api/java/org/apache/flink/api/java/LocalEnvironment.html>,
> everything works fine. But when I start a cluster (using
> <flink-install-path>/bin/start-cluster.sh), get t

Re: Class loading issues when using Remote Execution Environment

2018-04-25 Thread kedar mhaswade
Thank you for your response!

I have not tried the flink run app.jar route because the way the app is set
up does not allow me to do it. Basically, the app is a web application
which serves the UI and also submits a Flink job for running Cypher
queries. It is a proof-of-concept app, but IMO, a very useful one.

Here's how you can reproduce:
1) git clone g...@github.com:kedarmhaswade/gradoop_demo.git (this is my fork
of gradoop_demo)
2) cd gradoop_demo
3) git checkout dev => dev is the branch where my changes to make gradoop
work with remote environment go.
4) mvn clean package => should bring the gradoop JARs that this app needs;
these JARs should then be placed in /lib.
5) cp ~/.m2/repository/org/gradoop/gradoop-common/0.3.2/gradoop-common-0.3.2.jar /lib,
cp ~/.m2/repository/org/gradoop/gradoop-flink/0.3.2/gradoop-flink-0.3.2.jar /lib,
cp target/gradoop-demo-0.2.0.jar /lib.
6) start the local flink cluster (I have tried with latest
(built-from-source) 1.6-SNAPSHOT, or 1.4)
/bin/start-cluster.sh -- note the JM host and port
7) /start.sh --jmhost  --jmport 6123 (adjust host and
port per your cluster) => this is now configured to talk to the
RemoteEnvironment at given JM host and port.
8) open a browser at: http://localhost:2342/gradoop/html/cypher.html
9) hit the query button => this would throw the exception
10) Ctrl C the process in 7 and just restart it as java -cp
target/classes:target/gradoop-demo-shaded.jar
org.gradoop.demo.server.Server => starts LocalEnvironment
11) do 9 again and see the results shown nicely in the browser.

Here is the relevant code:
1) Choosing between a Remote or a Local Environment:
<https://github.com/kedarmhaswade/gradoop_demo/blob/dev/src/main/java/org/gradoop/demo/server/Server.java#L107>
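
In essence, the selection boils down to something like this (a simplified
sketch, not the verbatim Server.java code; jmHost/jmPort come from the
--jmhost/--jmport flags in step 7):

import org.apache.flink.api.java.ExecutionEnvironment;

// If a JobManager host was passed, go remote; otherwise stay local.
ExecutionEnvironment env = (jmHost != null)
    ? ExecutionEnvironment.createRemoteEnvironment(jmHost, jmPort)
    : ExecutionEnvironment.getExecutionEnvironment();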

The instructions are correct to my knowledge. Thanks for your willingness
to try. I have tried everything I can. With different Flink versions, I get
different results (I have also tried on 1.6-SNAPSHOT with class loading
config being parent-first, or child-first).

Regards,
Kedar


On Wed, Apr 25, 2018 at 1:08 AM, Chesnay Schepler <ches...@apache.org>
wrote:

> I couldn't spot any error in what you tried to do. Does the job-submission
> succeed if you submit the jar through the command-line client?
>
> Can you share the project, or a minimal reproducing version?
>
>
> On 25.04.2018 00:41, kedar mhaswade wrote:
>
> I am trying to get gradoop_demo
> <https://github.com/dbs-leipzig/gradoop_demo> (a gradoop based graph
> visualization app) working on Flink with *Remote* Execution Environment.
>
> This app, which is based on Gradoop, submits a job to the *preconfigured*
> execution environment, collects the results and sends it to the UI for
> rendering.
>
> When the execution environment is configured to be a LocalEnvironment
> <https://ci.apache.org/projects/flink/flink-docs-stable/api/java/org/apache/flink/api/java/LocalEnvironment.html>,
> everything works fine. But when I start a cluster (using
> <flink-install-path>/bin/start-cluster.sh), get the Job Manager endpoint
> (e.g. localhost:6123) and configure a RemoteEnvironment
> <https://ci.apache.org/projects/flink/flink-docs-stable/api/java/org/apache/flink/api/java/ExecutionEnvironment.html#createRemoteEnvironment-java.lang.String-int-org.apache.flink.configuration.Configuration-java.lang.String...->
>  and
> use that environment to run the job, I get exceptions [1].
>
> Based on the class loading doc
> <https://ci.apache.org/projects/flink/flink-docs-master/monitoring/debugging_classloading.html>,
> I copied the gradoop classes (gradoop-flink-0.3.3-SNAPSHOT.jar,
> gradoop-common-0.3.3-SNAPSHOT.jar) to the /lib
> folder (hoping that that way those classes will be available to all the
> executors in the cluster). I have ensured that the class that Flink fails
> to load is in fact available in the Gradoop jars that I copied to the /lib
> folder.
>
> I have tried using the RemoteEnvironment method with jarFiles argument
> where the passed JAR file is a fat jar containing everything (in which case
> there is no Gradoop JAR file in /lib folder).
>
> So, my questions are:
> 1) How can I use RemoteEnvironment?
> 2) Is there any other way of doing this *programmatically? *(That means I
> can't do flink run since I am interested in the job execution result as a
> blocking call -- which means ideally I don't want to use the submit RESTful
> API as well). I just want RemoteEnvironment to work as well as
> LocalEnvironment.
>
> Regards,
> Kedar
>
>
> [1]
> 2018-04-24 15:16:02,823 ERROR org.apache.flink.runtime.jobmanager.JobManager
>   - Failed to submit job 0c987c8704f8b7eb4d7d38efcb3d708d
> (Flink Java Job at Tue Apr 24 15:15:59 PDT 2018)
> java.lang.NoClassDefFoundError: Cou

Class loading issues when using Remote Execution Environment

2018-04-24 Thread kedar mhaswade
I am trying to get gradoop_demo (a gradoop-based graph
visualization app) working on Flink with *Remote* Execution Environment.

This app, which is based on Gradoop, submits a job to the *preconfigured*
execution environment, collects the results and sends it to the UI for
rendering.

When the execution environment is configured to be a LocalEnvironment,
everything works fine. But when I start a cluster (using
<flink-install-path>/bin/start-cluster.sh), get the Job Manager endpoint
(e.g. localhost:6123), and configure a RemoteEnvironment and use that
environment to run the job, I get exceptions [1].

Based on the class loading doc, I copied the gradoop classes
(gradoop-flink-0.3.3-SNAPSHOT.jar, gradoop-common-0.3.3-SNAPSHOT.jar) to
the /lib folder (hoping that that way those classes will be available to
all the executors in the cluster). I have ensured that the class that Flink
fails to load is in fact available in the Gradoop jars that I copied to the
/lib folder.

I have tried using the RemoteEnvironment method with jarFiles argument
where the passed JAR file is a fat jar containing everything (in which case
there is no Gradoop JAR file in /lib folder).
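
That is, something like this (illustrative host/port; the jar path is the
shaded jar from my build):

import org.apache.flink.api.java.ExecutionEnvironment;

// jarFiles passed here are shipped to the cluster with each submitted job
ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
    "localhost", 6123, "target/gradoop-demo-shaded.jar");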

So, my questions are:
1) How can I use RemoteEnvironment?
2) Is there any other way of doing this *programmatically?* (That means I
can't do flink run since I am interested in the job execution result as a
blocking call -- which means ideally I don't want to use the submit RESTful
API either.) I just want RemoteEnvironment to work as well as
LocalEnvironment.

Regards,
Kedar


[1]
2018-04-24 15:16:02,823 ERROR org.apache.flink.runtime.jobmanager.JobManager - Failed to
submit job 0c987c8704f8b7eb4d7d38efcb3d708d (Flink Java Job at Tue Apr 24
15:15:59 PDT 2018)
java.lang.NoClassDefFoundError: Could not initialize class *org.gradoop.common.model.impl.id.GradoopId*
  at java.io.ObjectStreamClass.hasStaticInitializer(Native Method)
  at java.io.ObjectStreamClass.computeDefaultSUID(ObjectStreamClass.java:1887)
  at java.io.ObjectStreamClass.access$100(ObjectStreamClass.java:79)
  at java.io.ObjectStreamClass$1.run(ObjectStreamClass.java:263)
  at java.io.ObjectStreamClass$1.run(ObjectStreamClass.java:261)
  at java.security.AccessController.doPrivileged(Native Method)
  at java.io.ObjectStreamClass.getSerialVersionUID(ObjectStreamClass.java:260)
  at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:682)
  at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1876)
  at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1745)
  at java.io.ObjectInputStream.readClass(ObjectInputStream.java:1710)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1550)
  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
  at java.util.HashSet.readObject(HashSet.java:341)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1158)
  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
  at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278)
  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202)
  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
  at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:290)


Re: Flink 1.4.2 in Zeppelin Notebook

2018-04-09 Thread kedar mhaswade
Hmm. What error do you see on the Zeppelin console when you click the run
(flink code) button after making these changes to the flink interpreter
config (I assume you restarted the interpreter)?

Regards,
Kedar

On Mon, Apr 9, 2018 at 12:50 AM, Rico Bergmann <i...@ricobergmann.de> wrote:

> Hi.
>
> Thanks for your reply. But this also didn’t work for me.
>
> In the JM log I get an akka Error („dropping message for non-local
> recipient“).
>
> My setup: I have Flink running on a Kubernetes cluster, version 1.4.2.
> Zeppelin is version 0.8, using the flink interpreter compiled against
> flink 1.1.3.
> When submitting a job with the CLI tool everything is working fine. The
> CLI tool is version 1.4.2 ...
>
> Any other suggestions?
>
> Thanks a lot.
> Best,
> Rico.
>
>
> Am 06.04.2018 um 18:44 schrieb kedar mhaswade <kedar.mhasw...@gmail.com>:
>
> Yes. You need to add the two properties for the job manager (I agree, it
> is confusing because the properties named "host" and "port" are already
> available, but the names of the useful properties are different):
>
> Could you please try this and let us know if it works for you?
>
> Regards,
> Kedar
>
>
> On Fri, Apr 6, 2018 at 5:51 AM, Dipl.-Inf. Rico Bergmann <
> i...@ricobergmann.de> wrote:
>
>> Hi!
>>
>> Has someone successfully integrated Flink 1.4.2 into Zeppelin notebook
>> (using Flink in cluster mode, not local mode)?
>>
>> Best,
>>
>> Rico.
>>
>>
>


Re: Flink 1.4.2 in Zeppelin Notebook

2018-04-06 Thread kedar mhaswade
Yes. You need to add the two properties for the job manager (I agree, it is
confusing because the properties named "host" and "port" are already
available, but the names of the useful properties are different):

Could you please try this and let us know if it works for you?

Regards,
Kedar


On Fri, Apr 6, 2018 at 5:51 AM, Dipl.-Inf. Rico Bergmann <
i...@ricobergmann.de> wrote:

> Hi!
>
> Has someone successfully integrated Flink 1.4.2 into Zeppelin notebook
> (using Flink in cluster mode, not local mode)?
>
> Best,
>
> Rico.
>
>


Re: Flink TaskManager and JobManager internals

2018-03-28 Thread kedar mhaswade
On Wed, Mar 28, 2018 at 3:14 AM, Niclas Hedhman  wrote:

> Hi,
>
> is there some document (or presentation) that explains the internals of
> how a Job gets deployed on to the cluster? Communications, Classloading and
> Serialization (if any) are the key points here I think.
>

I don't know of any specific presentations, but data Artisans provides
http://training.data-artisans.com/system-overview.html, which is pretty
good.
The Flink documentation is comprehensive.
Class-loading:
https://ci.apache.org/projects/flink/flink-docs-master/monitoring/debugging_classloading.html
State serialization:
https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/state/custom_serialization.html


>
> I suspect that my application modeling framework is incompatible with the
> standard Flink mechanism, and I would like to learn how much effort there
> is to make my own mechanism (assuming it is possible, since Yarn and Mesos
> are in similar situation)
>

Don't know what you mean by an application "modeling" framework, but if you
mean that you have a Flink app (batch or streaming) that you'd want to
deploy to YARN (or Mesos, which is similar), then the flow appears to be:
1- Create a "Flink Cluster" (also called a YARN session) when a user does
"bin/yarn-session.sh " and then
2- Run the app when the user does "bin/flink run  ".

It's the user's responsibility to shut down the cluster (YARN session) by
sending a "stop" command to the YARN session created in 1). The code
appears to be in classes like
org.apache.flink.yarn.cli.FlinkYarnSessionCli (manage the YARN session)
and org.apache.flink.client.CliFrontend (submit a Flink app to the YARN
session).

Regards,
Kedar


>
> Thanks in Advance
> --
> Niclas Hedhman, Software Developer
> http://zest.apache.org - New Energy for Java
>


Programmatic creation of YARN sessions and deployment (running) Flink jobs on it.

2018-03-26 Thread kedar mhaswade
Typically, when one wants to run a Flink job on a Hadoop YARN installation,
one creates a Yarn session (e.g. ./bin/yarn-session.sh -n 4 -qu
test-yarn-queue) and runs the intended Flink job(s) (e.g. ./bin/flink run -c
MyFlinkApp -m job-manager-host:job-manager-port  myapp.jar) on the Flink cluster whose job manager URL is returned
by the previous command.

My questions are:
- Does yarn-session.sh need conf/flink-conf.yaml to be available in the
Flink installation on every container in YARN? If this file is needed, how
can
one run different YARN sessions (with potentially very different
configurations) on the same Hadoop YARN installation simultaneously?
- Is it possible to start the YARN session programmatically? If yes, I
believe I should look at classes like YarnClusterClient.
Is that right? Is there any other guidance on how to do this
programmatically (e.g. I have a management UI that wants to start/stop YARN
sessions and deploy Flink jobs to it)?

Regards,
Kedar


Re: [DISCUSS] Inverted (child-first) class loading

2018-03-12 Thread kedar mhaswade
Many thanks Aljoscha! I am sorry I missed this section.

Regards,
Kedar

On Mon, Mar 12, 2018 at 9:16 AM, Aljoscha Krettek <aljos...@apache.org>
wrote:

> Hi Kedar,
>
> There is this section in the Flink docs:
> https://ci.apache.org/projects/flink/flink-docs-master/monitoring/debugging_classloading.html
>
> Best,
> Aljoscha
>
>
> On 10. Mar 2018, at 05:53, kedar mhaswade <kedar.mhasw...@gmail.com>
> wrote:
>
> This is an interesting question and it usually has consequences that are
> far-reaching in user experience.
>
> If a Flink app is supposed to be a "standalone app" that any Flink
> installation should be able to run, then the child-first classloading makes
> sense. This is how we build many of the Java application servers (e.g.
> GlassFish, JBoss etc). Doing this makes the application "self-contained"
> and perhaps portable. Of course, this increases the size of the Jar. The
> one issue to watch out for is application using framework classes that are
> newer than framework itself. For instance, should I expect my app with
> Flink *1.6* DataSet/DataStream classes to run smoothly on a Flink 1.5
> installation?
>
> If a Flink app depends on a particular (version of the) Flink
> installation, then, if using parent-first classloading, the app can make
> use of the classes that the installation itself uses. This makes the app
> (comparatively) less self-contained, but this limits the size of the app's
> Jar. There are advantages of doing this, but it poses problems especially
> in upgrades.
>
> Whether one or the other should be the behavior largely depends on how the
> applications are built, tested, and deployed. The application's build comes
> into the picture because, in tools like Maven, a dependency can be declared
> "provided": if you know that your app's dependency is also your framework's
> (i.e. Flink's) dependency and you, as an app developer, are okay with that,
> Maven won't bundle it in your app's Jar.
>
> So, my recommendation is that since this appears like a backward
> incompatible change, Flink should provide an option to go back to
> parent-first classloading for a given app, at least for 1.5. Child-first
> classloading seems like the right thing to do given how (unnecessarily)
> complicated the deployments have become and given how frequently apps use
> library versions that are different from the framework.
>
> The ElasticSearch solution has merits too, but it is unclear if it helps *at
> deployment time* merely to identify that there is a duplicate (without
> knowing where it has come from). Ideally, when people build the so-called
> shadow Jar (one Jar with all dependencies) the build script should warn of
> the duplicates. Shadow Jars alleviate (but do not remove) the problems of
> "Jar Hell". But it seems to me that till we move to a modular Java (that is
> Java 9; I think this is way out in future), this is the preferred solution.
>
> That said, I'd really like to see a classloading section in Flink docs
> (somewhere in dev/best_practices.html). Is a JIRA in order?
>
> Regards,
> Kedar
>
> On Fri, Mar 9, 2018 at 1:52 PM, Stephan Ewen <ewenstep...@gmail.com>
> wrote:
>
>> @Ken very interesting thought.
>>
>> One could have three options:
>>   - forbid duplicate classes
>>   - parent first conflict resolution
>>   - child first conflict resolution
>>
>> Having number one as the default and let the error message suggest
>> options two and three as options would definitely make users aware of the
>> issue...
>>
>> On Fri, Mar 9, 2018, 21:09 Ken Krugler <kkrugler_li...@transpac.com>
>> wrote:
>>
>>> I can’t believe I’m suggesting this, but perhaps the Elasticsearch
>>> “Hammer of Thor” (aka “jar hell”) approach would be appropriate here.
>>>
>>> Basically they prevent a program from running if there are duplicate
>>> classes on the classpath.
>>>
>>> This causes headaches when you really need a different version of
>>> library X, and that’s already on the class path.
>>>
>>> See https://github.com/elastic/elasticsearch/issues/14348 for an
>>> example of the issues it can cause.
>>>
>>> But it definitely catches a lot of oops-ish mistakes in building the
>>> jars, and makes debugging easier (they print out “class X jar1: <path to jar> jar2: <path to jar>”).
>>>
>>> Caused by: java.lang.IllegalStateException: jar hell!
>>> class: jdk.packager.services.UserJvmOptionsService
>>> jar1: 
>>> /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/lib/ant-javafx.jar
>>> j

Running the executables from Flink distribution of the source build

2018-03-11 Thread kedar mhaswade
Flink gurus!

I have built Flink from source. I find that the executables are all at:

/flink-dist/target/flink-1.6-SNAPSHOT-bin/flink-1.6-SNAPSHOT.

However, when I try to run start-scala-shell.sh local from the bin
subfolder of this folder, it does not seem to run the simple wordcount
example. Am I doing it right? How else do people test out the distribution
of the Flink sources that they have just built?

Regards,
Kedar

PS- A tentative thread dump on the JobManager shows that many threads are
just waiting on some condition variable. It appears like a livelock. The
task manager has one main thread, which is also waiting.


Re: [DISCUSS] Inverted (child-first) class loading

2018-03-10 Thread kedar mhaswade
This is an interesting question and it usually has consequences that are
far-reaching in user experience.

If a Flink app is supposed to be a "standalone app" that any Flink
installation should be able to run, then the child-first classloading makes
sense. This is how we build many of the Java application servers (e.g.
GlassFish, JBoss etc). Doing this makes the application "self-contained"
and perhaps portable. Of course, this increases the size of the Jar. The
one issue to watch out for is an application using framework classes that
are newer than the framework itself. For instance, should I expect my app with
Flink *1.6* DataSet/DataStream classes to run smoothly on a Flink 1.5
installation?

If a Flink app depends on a particular (version of the) Flink installation,
then, if using parent-first classloading, the app can make use of the
classes that the installation itself uses. This makes the app
(comparatively) less self-contained, but this limits the size of the app's
Jar. There are advantages of doing this, but it poses problems especially
in upgrades.

Whether one or the other should be the behavior largely depends on how the
applications are built, tested, and deployed. The application's build comes
into the picture because, in tools like Maven, a dependency can be declared
"provided": if you know that your app's dependency is also your framework's
(i.e. Flink's) dependency and you, as an app developer, are okay with that,
Maven won't bundle it in your app's Jar.

So, my recommendation is that since this appears like a backward
incompatible change, Flink should provide an option to go back to
parent-first classloading for a given app, at least for 1.5. Child-first
classloading seems like the right thing to do given how (unnecessarily)
complicated the deployments have become and given how frequently apps use
library versions that are different from the framework.

The ElasticSearch solution has merits too, but it is unclear if it helps
*at deployment time* merely to identify that there is a duplicate (without
knowing where it has come from). Ideally, when people build the so-called
shadow Jar (one Jar with all dependencies), the build script should warn of
the duplicates. Shadow Jars alleviate (but do not remove) the problems of
"Jar Hell". But it seems to me that till we move to a modular Java (that is
Java 9; I think this is way out in the future), this is the preferred solution.
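
The mechanism itself is small; a child-first loader looks roughly like this
(an illustrative sketch, not Flink's actual implementation, which must also
always delegate java.* and framework API classes to the parent):

import java.net.URL;
import java.net.URLClassLoader;

public class ChildFirstClassLoader extends URLClassLoader {

    public ChildFirstClassLoader(URL[] jarUrls, ClassLoader parent) {
        super(jarUrls, parent);
    }

    @Override
    protected synchronized Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        Class<?> c = findLoadedClass(name);       // already loaded by this loader?
        if (c == null) {
            try {
                c = findClass(name);              // child (the app's jars) first
            } catch (ClassNotFoundException e) {
                c = super.loadClass(name, false); // fall back to the parent
            }
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }
}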

That said, I'd really like to see a classloading section in Flink docs
(somewhere in dev/best_practices.html). Is a JIRA in order?

Regards,
Kedar

On Fri, Mar 9, 2018 at 1:52 PM, Stephan Ewen  wrote:

> @Ken very interesting thought.
>
> One could have three options:
>   - forbid duplicate classes
>   - parent first conflict resolution
>   - child first conflict resolution
>
> Having number one as the default and let the error message suggest options
> two and three as options would definitely make users aware of the issue...
>
> On Fri, Mar 9, 2018, 21:09 Ken Krugler 
> wrote:
>
>> I can’t believe I’m suggesting this, but perhaps the Elasticsearch
>> “Hammer of Thor” (aka “jar hell”) approach would be appropriate here.
>>
>> Basically they prevent a program from running if there are duplicate
>> classes on the classpath.
>>
>> This causes headaches when you really need a different version of library
>> X, and that’s already on the class path.
>>
>> See https://github.com/elastic/elasticsearch/issues/14348 for an example
>> of the issues it can cause.
>>
>> But it definitely catches a lot of oops-ish mistakes in building the
>> jars, and makes debugging easier (they print out “class X jar1: <path to jar> jar2: <path to jar>”).
>>
>> Caused by: java.lang.IllegalStateException: jar hell!
>> class: jdk.packager.services.UserJvmOptionsService
>> jar1: 
>> /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/lib/ant-javafx.jar
>> jar2: 
>> /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/lib/packager.jar
>>
>> — Ken
>>
>>
>> On Mar 9, 2018, at 3:21 AM, Stephan Ewen  wrote:
>>
>> Hi all!
>>
>> Flink 1.4 introduces child-first classloading by default, for the
>> application libraries.
>>
>> We added that, because it allows applications to use different versions
>> of many libraries, compared to what Flink uses in its core, or compared to
>> what other dependencies (like Hadoop) pull into the class path.
>>
>> For example, applications can use different versions of akka, Avro,
>> Protobuf, etc. Compared to what Flink / Hadoop / etc. uses.
>>
>> Now, while that is nice, child-first classloading runs into trouble when
>> the application jars are not properly built, meaning when the application
>> JAR contains libraries that it should not (because they are already in the
>> classpath / lib folder).
>>
>> For example, when the class path has the Kafka Connector (connector is in
>> the lib directory) and the application jar also contains Kafka, the we get
>> nasty 

Re: Job is be cancelled, but the stdout log still prints

2018-03-08 Thread kedar mhaswade
Also, in addition to what Gary said, if you take Flink completely out of
the picture and write a simple Java class with a main method and the static
block (!) which does some long-running task like getLiveInfo(), then
chances are that your class will make the JVM hang!

Basically, what you are doing is starting a bunch of threads (which are
perhaps non-daemon by default) and leaving them running. Since there is at
least one non-daemon thread running, the JVM is not allowed to shut down,
causing the hang.

Regards,
Kedar


On Thu, Mar 8, 2018 at 3:15 AM, Gary Yao  wrote:

> Hi,
>
> You are not shutting down the ScheduledExecutorService [1], which means that
> after job cancelation the thread will continue running getLiveInfo(). The
> user code class loader and your classes won't be garbage collected. You
> should use the RichFunction#close callback to shut down your thread pool [2].
>
> Best,
> Gary
>
> [1] https://stackoverflow.com/questions/10504172/how-to-shutdown-an-executorservice
> [2] https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/api_concepts.html#rich-functions
>
>
> On Thu, Mar 8, 2018 at 3:11 AM, sundy <543950...@qq.com> wrote:
>
>>
>> Hi:
>>
>> I am facing a problem: the taskmanagers on 3 nodes are still running, and
>> I have made sure that all jobs are cancelled, but I can see that the stdout
>> logs are still printing all the way. The job's parallelism is 6.
>>
>> I wrote a scheduled pool like this
>>
>> static {
>>   Executors.newScheduledThreadPool(1).scheduleAtFixedRate(new Runnable() {
>> @Override
>> public void run() {
>>   try {
>> getLiveInfo();
>>   } catch (Exception e) {
>> e.printStackTrace();
>>   }
>> }
>>   }, 0, 60, TimeUnit.SECONDS);
>> }
>>
>> Is it that the static methods will still be running in the taskmanagers
>> even if the job is cancelled? That's weird.
>>
>
>


CliFrontend hang in the local case?

2018-02-26 Thread kedar mhaswade
I am seeing a hang where the main thread of CliFrontend goes to timed
waiting. This appears like a livelock. My local setup is simple: A job
manager, a task manager on MacOS. My algorithm is based on Gelly's vertex
centric computation. The resultant graph's vertex count is about 4
million. I am printing the vertices to a file using

resultVertices.writeAsCsv(save, FileSystem.WriteMode.OVERWRITE);

and that seems to write the csv alright, however my invocation of flink run
appears hung because of the timed waiting of the main thread in
CliFrontend. My config in flink-conf.yaml is simple:

akka.client.timeout: 60 min
#taskmanager.heap.mb: 16384
#jobmanager.heap.mb: 8192
# 25 Feb
taskmanager.numberOfTaskSlots: 4
taskmanager.heap.mb: 5
jobmanager.heap.mb: 2

The log does show:

02/25/2018 18:06:34 DataSink (CsvOutputFormat (path: file:/tmp/nis, delimiter: ,))(1/1) switched to FINISHED
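
For reference, the tail of my main() boils down to this (a reconstructed,
simplified sketch; the print()/collect() frames at line 143 in the dump
below are what is blocking):

// fragment of main(); resultVertices is the Gelly result DataSet
resultVertices.writeAsCsv(save, FileSystem.WriteMode.OVERWRITE); // the sink that FINISHED
resultVertices.print(); // line 143: print() calls collect(), which triggers the
                        // execution and ships all ~4 million vertices back
                        // into the client JVM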

The main thread's dump is here:
"main" #1 prio=5 os_prio=31 tid=0x7ffa6c801800 nid=0x2703 waiting on
condition [0x7ffdb000]
   java.lang.Thread.State: TIMED_WAITING (parking)
  at sun.misc.Unsafe.park(Native Method)
  - parking to wait for  <0x00076bb82568> (a
scala.concurrent.impl.Promise$CompletionLatch)
  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
  at
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
  at
scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:212)
  at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222)
  at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:157)
  at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:169)
  at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:169)
  at
scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
  at scala.concurrent.Await$.ready(package.scala:169)
  at scala.concurrent.Await.ready(package.scala)
  at
org.apache.flink.runtime.client.JobClient.awaitJobResult(JobClient.java:266)
  at
org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:387)
  at
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:481)
  at
org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:105)
  at
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:456)
  at
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:444)
  at
org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
  at
org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:815)
  at org.apache.flink.api.java.DataSet.collect(DataSet.java:413)
  at org.apache.flink.api.java.DataSet.print(DataSet.java:1652)
  at
StatefulVertexCentricInfluenceScore.main(StatefulVertexCentricInfluenceScore.java:143)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:525)
  at
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:417)
  at
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:396)
  at
org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:802)
  at org.apache.flink.client.CliFrontend.run(CliFrontend.java:282)
  at
org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1054)
  at org.apache.flink.client.CliFrontend$1.call(CliFrontend.java:1101)
  at org.apache.flink.client.CliFrontend$1.call(CliFrontend.java:1098)
  at
org.apache.flink.runtime.security.HadoopSecurityContext$$Lambda$6/812553708.run(Unknown
Source)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
  at
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
  at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1098)

StatefulVertexCentricInfluenceScore.main(StatefulVertexCentricInfluenceScore.java:143),
in the dump above, is the print() line from my Java source.

The DataSink(Collect) on the dashboard shows that it is still running [1]! I
have been able to submit another, smaller job, which runs to completion.
There is nothing suspicious in the task manager or job manager logs or
thread dumps.

Any idea what is going on?

Regards,
Kedar

The entire thread dump of the CliFrontend JVM (that appears hung) via
jstack is:
# taken on 26 Feb 2018
# 2018-02-26 09:33:19
Full thread dump Java HotSpot(TM)