Can you please try the 0.5.0 release (
http://livy.incubator.apache.org/download/)? I believe 0.5.0 should have
fixed this concurrency issue.
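
If you are pulling the client from Maven Central, the 0.5.0 coordinates
should be org.apache.livy:livy-client-http:0.5.0-incubating (plus livy-api).
For reference, here is a minimal sketch of the submission path that is
failing for you, modeled on the PiJob example from the Livy docs (the Livy
URL, jar path, and sample count below are placeholders):

import java.io.File;
import java.net.URI;

import org.apache.livy.LivyClient;
import org.apache.livy.LivyClientBuilder;

public class SparkLivyRun {
  public static void main(String[] args) throws Exception {
    String livyUrl = "http://localhost:8998";  // placeholder: your Livy server
    String piJar = "/path/to/pi-job.jar";      // placeholder: jar containing PiJob

    LivyClient client = new LivyClientBuilder()
        .setURI(new URI(livyUrl))
        .build();
    try {
      // The step that fails in your trace: uploadJar() runs an AddJarJob on
      // the server, and get() rethrows the server-side error wrapped in a
      // java.util.concurrent.ExecutionException.
      client.uploadJar(new File(piJar)).get();

      // PiJob is the example Job<Double> from the Livy programmatic API docs.
      double pi = client.submit(new PiJob(10000)).get();
      System.out.println("Pi is roughly " + pi);
    } finally {
      client.stop(true);
    }
  }
}

The get() at SparkLivyRun.java:31 in your trace is the point where the
server-side py4j failure from addJarOrPyFile surfaces on the client, which
suggests the client code itself is fine.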

Thanks
Jerry

2018-02-23 14:42 GMT+08:00 Sudha KS <sudha...@fuzzylogix.com>:

> Hi all,
>
> Is anybody else facing this error?
> I get this error running the example code (PiJob), compiled with Scala 2.11
> and Java 1.8 against Livy 0.4.0-SNAPSHOT from the repo
> http://repo.hortonworks.com/content/repositories/releases, environment:
> HDP 2.6.4
>
>
> Uploading livy jobs jar to the SparkContext...
> [WARNING]
> java.util.concurrent.ExecutionException: java.io.IOException: Internal Server Error:
> "java.util.concurrent.ExecutionException: java.lang.RuntimeException: py4j.Py4JException: Error while obtaining a new communication channel
> py4j.CallbackClient.getConnectionLock(CallbackClient.java:218)
> py4j.CallbackClient.sendCommand(CallbackClient.java:337)
> py4j.CallbackClient.sendCommand(CallbackClient.java:316)
> py4j.reflection.PythonProxyHandler.invoke(PythonProxyHandler.java:103)
> com.sun.proxy.$Proxy24.getLocalTmpDirPath(Unknown Source)
> org.apache.livy.repl.PythonInterpreter.addPyFile(PythonInterpreter.scala:264)
> org.apache.livy.repl.ReplDriver$$anonfun$addJarOrPyFile$1.apply(ReplDriver.scala:110)
> org.apache.livy.repl.ReplDriver$$anonfun$addJarOrPyFile$1.apply(ReplDriver.scala:110)
> scala.Option.foreach(Option.scala:257)
> org.apache.livy.repl.ReplDriver.addJarOrPyFile(ReplDriver.scala:110)
> org.apache.livy.rsc.driver.JobContextImpl.addJarOrPyFile(JobContextImpl.java:100)
> org.apache.livy.rsc.driver.AddJarJob.call(AddJarJob.java:39)
> org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:57)
> org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:34)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> java.lang.Thread.run(Thread.java:745)"
>         at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>         at callers.SparkLivyRun.main(SparkLivyRun.java:31)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:282)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: Internal Server Error:
> "java.util.concurrent.ExecutionException: java.lang.RuntimeException: py4j.Py4JException: Error while obtaining a new communication channel
> py4j.CallbackClient.getConnectionLock(CallbackClient.java:218)
> py4j.CallbackClient.sendCommand(CallbackClient.java:337)
> py4j.CallbackClient.sendCommand(CallbackClient.java:316)
> py4j.reflection.PythonProxyHandler.invoke(PythonProxyHandler.java:103)
> com.sun.proxy.$Proxy24.getLocalTmpDirPath(Unknown Source)
> org.apache.livy.repl.PythonInterpreter.addPyFile(PythonInterpreter.scala:264)
> org.apache.livy.repl.ReplDriver$$anonfun$addJarOrPyFile$1.apply(ReplDriver.scala:110)
> org.apache.livy.repl.ReplDriver$$anonfun$addJarOrPyFile$1.apply(ReplDriver.scala:110)
> scala.Option.foreach(Option.scala:257)
> org.apache.livy.repl.ReplDriver.addJarOrPyFile(ReplDriver.scala:110)
> org.apache.livy.rsc.driver.JobContextImpl.addJarOrPyFile(JobContextImpl.java:100)
> org.apache.livy.rsc.driver.AddJarJob.call(AddJarJob.java:39)
> org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:57)
> org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:34)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> java.lang.Thread.run(Thread.java:745)"
>         at org.apache.livy.client.http.LivyConnection.sendRequest(LivyConnection.java:229)
>         at org.apache.livy.client.http.LivyConnection.post(LivyConnection.java:192)
>         at org.apache.livy.client.http.HttpClient$2.call(HttpClient.java:152)
>         at org.apache.livy.client.http.HttpClient$2.call(HttpClient.java:149)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         ... 1 more
> [WARNING] thread Thread[HttpClient-38,5,callers.SparkLivyRun] was
> interrupted but is still alive after waiting at least 14999msecs
> [WARNING] thread Thread[HttpClient-38,5,callers.SparkLivyRun] will linger
> despite being asked to die via interruption
> [WARNING] NOTE: 1 thread(s) did not finish despite being asked to  via
> interruption. This is not a problem with exec:java, it is a problem with
> the running code. Although not serious, it should be remedied.
> [WARNING] Couldn't destroy threadgroup org.codehaus.mojo.exec.ExecJavaMojo$IsolatedThreadGroup[name=callers.SparkLivyRun,maxpri=10]
> java.lang.IllegalThreadStateException
>         at java.lang.ThreadGroup.destroy(ThreadGroup.java:778)
>         at org.codehaus.mojo.exec.ExecJavaMojo.execute(ExecJavaMojo.java:321)
>         at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
>         at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
>         at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>         at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>         at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
>         at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
>         at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
>         at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
>         at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
>         at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
>         at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
>         at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
>         at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
>         at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
>         at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
>         at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
>         at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
>         at org.codehaus.classworlds.Launcher.main(Launcher.java:47)
>
>
