Yes, that is the Spark interpreter, and it will be “always on”.
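That long-lived job is the notebook’s SparkContext, so it is expected; if it ever gets stuck, you can restart the Spark interpreter from the Interpreter page in the web UI, or bounce the whole daemon from the shell. A minimal sketch, assuming the install path that appears in the logs in this thread:

```shell
# Restart Zeppelin (this also kills the remote Spark interpreter process
# and, with it, the always-on job shown in the screenshot).
ZEPPELIN_HOME=/home/ubuntu/zeppelin   # assumption: path taken from the logs below
"$ZEPPELIN_HOME/bin/zeppelin-daemon.sh" restart
```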

Mohit
www.dataorchardllc.com

> On Aug 8, 2016, at 10:33 AM, Brian Liao <brianbl...@gmail.com> wrote:
> 
> Hi 
> 
> Thank you for the help. This is really great.
> 
> I have Zeppelin running now with a remote standalone Spark cluster.
> 
> To summarize: you will need to install the Hadoop and Spark clients onto the 
> same server where you are running Zeppelin. You will also need to ensure that 
> the .xml files in /etc/hadoop/conf, /usr/lib/spark, and /etc/hive/conf are 
> configured properly; each of these needs to point to the right cluster.
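The checklist above can be sketched as a zeppelin-env.sh fragment; the paths are the defaults this thread mentions, and HIVE_CONF_DIR in particular is an assumption (verify the variable names against your Zeppelin version):

```shell
# conf/zeppelin-env.sh on the Zeppelin host: make the interpreter pick up
# the locally installed client configs that point at the remote cluster.
export HADOOP_CONF_DIR=/etc/hadoop/conf   # core-site.xml, hdfs-site.xml
export SPARK_HOME=/usr/lib/spark          # local Spark client install
export HIVE_CONF_DIR=/etc/hive/conf       # hive-site.xml (assumption: variable name)
```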
> 
> I am able to run the examples that came with the binary.
> 
> However, I want to check whether this is proper/normal behavior. I noticed 
> that Zeppelin fires a job on Spark and it never ends. See the screenshot below:
> 
> <PastedGraphic-1.png>
> 
> It seems the only way to get rid of it is to restart Zeppelin.
> 
> Thanks again for all the help.
> 
>> On Aug 4, 2016, at 8:46 PM, Jongyoul Lee <jongy...@gmail.com> wrote:
>> 
>> If you have spark binary in your local, you can enforce the version by 
>> setting SPARK_HOME in your zeppelin-env.sh
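As a concrete sketch (the 1.5.x path is hypothetical; point it at wherever your Spark 1.5 binary actually lives):

```shell
# conf/zeppelin-env.sh -- pin the Spark interpreter to a local Spark 1.5
# binary so the driver and the cluster speak the same wire protocol.
export SPARK_HOME=/opt/spark-1.5.2-bin-hadoop2.6   # hypothetical path
export MASTER=spark://10.1.4.190:7077              # standalone master from this thread
```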
>> 
>> On Fri, Aug 5, 2016 at 9:48 AM, Mohit Jaggi <mohitja...@gmail.com> wrote:
>> I defer to the experts, but that is what I do and it works. Weird RPC errors 
>> on Spark are often due to a version mismatch. I wish there were a simple 
>> version check, but AFAIK there isn't.
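There isn't a built-in check, but you can eyeball it from the shell: run `$SPARK_HOME/bin/spark-submit --version` on the Zeppelin host and on a cluster node, then compare the two strings. A small helper that compares only major.minor (the part that matters for compatibility here):

```shell
# Compare the major.minor components of two version strings.
same_minor() {
  # ${1%.*} strips the last dot-separated component: "1.5.2" -> "1.5"
  [ "${1%.*}" = "${2%.*}" ]
}

same_minor "1.5.2" "1.5.1" && echo "versions compatible"
same_minor "1.5.2" "1.6.1" || echo "version mismatch"
```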
>> 
>> Sent from my iPhone
>> 
>> On Aug 4, 2016, at 11:51 AM, Brian Liao <brianbl...@gmail.com> wrote:
>> 
>>> So you are saying that I need to install spark locally in order to get this 
>>> to work?
>>> 
>>>> On Aug 4, 2016, at 11:12 AM, Mohit Jaggi <mohitja...@gmail.com> wrote:
>>>> 
>>>> One of the Z config variables points to your local spark installation. 
>>>> Make sure it is the same version as the one on the cluster.
>>>> 
>>>>> On Aug 4, 2016, at 10:49 AM, Brian Liao <brianbl...@gmail.com> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> Thank you.
>>>>> 
>>>>> I used the prebuilt binary package, version 0.6.0, of Zeppelin, and my 
>>>>> Spark cluster is 1.5.
>>>>> 
>>>>> Is there a way to force the prebuilt Zeppelin to use 1.5, or is compiling 
>>>>> from source the only way to do this?
>>>>> 
>>>>> Also, I don’t need to install Spark locally where I host Zeppelin, do I?
>>>>> 
>>>>> 
>>>>>> On Aug 4, 2016, at 10:25 AM, Mohit Jaggi <mohitja...@gmail.com> wrote:
>>>>>> 
>>>>>> mismatched spark version?
>>>>>> 
>>>>>> 
>>>>>>> On Aug 4, 2016, at 8:11 AM, Brian Liao <brianbl...@gmail.com> wrote:
>>>>>>> 
>>>>>>> Hi 
>>>>>>> 
>>>>>>> I am following this guide (http://zeppelin.apache.org/download.html) to 
>>>>>>> install Zeppelin, but wasn't able to configure it and get it to work. I 
>>>>>>> used the binary package (the one that doesn't need to be compiled).
>>>>>>> 
>>>>>>> I would like to set up Zeppelin as a separate server but really have no 
>>>>>>> idea what the requirements for this would be. 
>>>>>>> 
>>>>>>> I have a Spark (1.5) standalone cluster setup.
>>>>>>> 
>>>>>>> The error I get from my Zeppelin interpreter log when trying to run a 
>>>>>>> simple %md command is the following: 
>>>>>>> 
>>>>>>>  INFO [2016-08-04 00:24:51,645] ({Thread-0} 
>>>>>>> RemoteInterpreterServer.java[run]:81) - Starting remote interpreter 
>>>>>>> server on port 40408
>>>>>>>  INFO [2016-08-04 00:24:51,980] ({pool-1-thread-2} 
>>>>>>> RemoteInterpreterServer.java[createInterpreter]:169) - Instantiate 
>>>>>>> interpreter org.apache.zeppelin.spark.SparkInterpreter
>>>>>>>  INFO [2016-08-04 00:24:52,019] ({pool-1-thread-2} 
>>>>>>> RemoteInterpreterServer.java[createInterpreter]:169) - Instantiate 
>>>>>>> interpreter org.apache.zeppelin.spark.PySparkInterpreter
>>>>>>>  INFO [2016-08-04 00:24:52,023] ({pool-1-thread-2} 
>>>>>>> RemoteInterpreterServer.java[createInterpreter]:169) - Instantiate 
>>>>>>> interpreter org.apache.zeppelin.spark.SparkRInterpreter
>>>>>>>  INFO [2016-08-04 00:24:52,024] ({pool-1-thread-2} 
>>>>>>> RemoteInterpreterServer.java[createInterpreter]:169) - Instantiate 
>>>>>>> interpreter org.apache.zeppelin.spark.SparkSqlInterpreter
>>>>>>>  INFO [2016-08-04 00:24:52,027] ({pool-1-thread-2} 
>>>>>>> RemoteInterpreterServer.java[createInterpreter]:169) - Instantiate 
>>>>>>> interpreter org.apache.zeppelin.spark.DepInterpreter
>>>>>>>  INFO [2016-08-04 00:24:52,056] ({pool-2-thread-2} 
>>>>>>> SchedulerFactory.java[jobStarted]:131) - Job 
>>>>>>> remoteInterpretJob_1470270292054 started by scheduler 
>>>>>>> org.apache.zeppelin.spark.SparkInterpreter1041596993
>>>>>>>  WARN [2016-08-04 00:24:52,897] ({pool-2-thread-2} 
>>>>>>> NativeCodeLoader.java[<clinit>]:62) - Unable to load native-hadoop 
>>>>>>> library for your platform... using builtin-java classes where applicable
>>>>>>>  INFO [2016-08-04 00:24:53,046] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Changing view acls to: root
>>>>>>>  INFO [2016-08-04 00:24:53,047] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Changing modify acls to: root
>>>>>>>  INFO [2016-08-04 00:24:53,047] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - SecurityManager: authentication disabled; 
>>>>>>> ui acls disabled; users with view permissions: Set(root); users with 
>>>>>>> modify permissions: Set(root)
>>>>>>>  INFO [2016-08-04 00:24:53,279] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Starting HTTP Server
>>>>>>>  INFO [2016-08-04 00:24:53,316] ({pool-2-thread-2} 
>>>>>>> Server.java[doStart]:272) - jetty-8.y.z-SNAPSHOT
>>>>>>>  INFO [2016-08-04 00:24:53,329] ({pool-2-thread-2} 
>>>>>>> AbstractConnector.java[doStart]:338) - Started 
>>>>>>> SocketConnector@0.0.0.0:42231
>>>>>>>  INFO [2016-08-04 00:24:53,330] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Successfully started service 'HTTP class 
>>>>>>> server' on port 42231.
>>>>>>>  INFO [2016-08-04 00:24:55,298] ({pool-2-thread-2} 
>>>>>>> SparkInterpreter.java[createSparkContext]:233) - ------ Create new 
>>>>>>> SparkContext spark://10.1.4.190:7077 -------
>>>>>>>  INFO [2016-08-04 00:24:55,313] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Running Spark version 1.6.1
>>>>>>>  WARN [2016-08-04 00:24:55,326] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logWarning]:70) - 
>>>>>>> SPARK_CLASSPATH was detected (set to 
>>>>>>> ':/home/ubuntu/zeppelin/interpreter/spark/dep/*:/home/ubuntu/zeppelin/interpreter/spark/*::/home/ubuntu/zeppelin/conf:/home/ubuntu/zeppelin/conf:/home/ubuntu/zeppelin/lib/zeppelin-interpreter-0.6.0.jar:/etc/hadoop/conf/').
>>>>>>> This is deprecated in Spark 1.0+.
>>>>>>> 
>>>>>>> Please instead use:
>>>>>>>  - ./spark-submit with --driver-class-path to augment the driver 
>>>>>>> classpath
>>>>>>>  - spark.executor.extraClassPath to augment the executor classpath
>>>>>>>         
>>>>>>>  WARN [2016-08-04 00:24:55,327] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logWarning]:70) - Setting 'spark.executor.extraClassPath' 
>>>>>>> to 
>>>>>>> ':/home/ubuntu/zeppelin/interpreter/spark/dep/*:/home/ubuntu/zeppelin/interpreter/spark/*::/home/ubuntu/zeppelin/conf:/home/ubuntu/zeppelin/conf:/home/ubuntu/zeppelin/lib/zeppelin-interpreter-0.6.0.jar:/etc/hadoop/conf/'
>>>>>>>  as a work-around.
>>>>>>>  WARN [2016-08-04 00:24:55,327] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logWarning]:70) - Setting 'spark.driver.extraClassPath' 
>>>>>>> to 
>>>>>>> ':/home/ubuntu/zeppelin/interpreter/spark/dep/*:/home/ubuntu/zeppelin/interpreter/spark/*::/home/ubuntu/zeppelin/conf:/home/ubuntu/zeppelin/conf:/home/ubuntu/zeppelin/lib/zeppelin-interpreter-0.6.0.jar:/etc/hadoop/conf/'
>>>>>>>  as a work-around.
>>>>>>>  INFO [2016-08-04 00:24:55,338] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Changing view acls to: root
>>>>>>>  INFO [2016-08-04 00:24:55,339] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Changing modify acls to: root
>>>>>>>  INFO [2016-08-04 00:24:55,339] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - SecurityManager: authentication disabled; 
>>>>>>> ui acls disabled; users with view permissions: Set(root); users with 
>>>>>>> modify permissions: Set(root)
>>>>>>>  INFO [2016-08-04 00:24:55,483] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Successfully started service 'sparkDriver' 
>>>>>>> on port 56365.
>>>>>>>  INFO [2016-08-04 00:24:55,723] 
>>>>>>> ({sparkDriverActorSystem-akka.actor.default-dispatcher-4} 
>>>>>>> Slf4jLogger.scala[applyOrElse]:80) - Slf4jLogger started
>>>>>>>  INFO [2016-08-04 00:24:55,749] 
>>>>>>> ({sparkDriverActorSystem-akka.actor.default-dispatcher-4} 
>>>>>>> Slf4jLogger.scala[apply$mcV$sp]:74) - Starting remoting
>>>>>>>  INFO [2016-08-04 00:24:55,869] 
>>>>>>> ({sparkDriverActorSystem-akka.actor.default-dispatcher-4} 
>>>>>>> Slf4jLogger.scala[apply$mcV$sp]:74) - Remoting started; listening on 
>>>>>>> addresses :[akka.tcp://sparkDriverActorSystem@10.1.4.253:57523]
>>>>>>>  INFO [2016-08-04 00:24:55,869] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Successfully started service 
>>>>>>> 'sparkDriverActorSystem' on port 57523.
>>>>>>>  INFO [2016-08-04 00:24:55,878] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Registering MapOutputTracker
>>>>>>>  INFO [2016-08-04 00:24:55,894] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Registering BlockManagerMaster
>>>>>>>  INFO [2016-08-04 00:24:55,904] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Created local directory at 
>>>>>>> /tmp/blockmgr-947bf6b6-5c70-4d29-b4b0-975692e0c08d
>>>>>>>  INFO [2016-08-04 00:24:55,908] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - MemoryStore started with capacity 511.1 MB
>>>>>>>  INFO [2016-08-04 00:24:55,981] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Registering OutputCommitCoordinator
>>>>>>>  INFO [2016-08-04 00:24:56,081] ({pool-2-thread-2} 
>>>>>>> Server.java[doStart]:272) - jetty-8.y.z-SNAPSHOT
>>>>>>>  INFO [2016-08-04 00:24:56,100] ({pool-2-thread-2} 
>>>>>>> AbstractConnector.java[doStart]:338) - Started 
>>>>>>> SelectChannelConnector@0.0.0.0:4040
>>>>>>>  INFO [2016-08-04 00:24:56,100] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Successfully started service 'SparkUI' on 
>>>>>>> port 4040.
>>>>>>>  INFO [2016-08-04 00:24:56,103] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Started SparkUI at http://10.1.4.253:4040
>>>>>>>  INFO [2016-08-04 00:24:56,215] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - HTTP File server directory is 
>>>>>>> /tmp/spark-96766055-2740-40cc-b077-475152c38b03/httpd-e1f81450-2a99-413e-af3f-b21fb5ece333
>>>>>>>  INFO [2016-08-04 00:24:56,215] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Starting HTTP Server
>>>>>>>  INFO [2016-08-04 00:24:56,216] ({pool-2-thread-2} 
>>>>>>> Server.java[doStart]:272) - jetty-8.y.z-SNAPSHOT
>>>>>>>  INFO [2016-08-04 00:24:56,218] ({pool-2-thread-2} 
>>>>>>> AbstractConnector.java[doStart]:338) - Started 
>>>>>>> SocketConnector@0.0.0.0:42734
>>>>>>>  INFO [2016-08-04 00:24:56,218] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Successfully started service 'HTTP file 
>>>>>>> server' on port 42734.
>>>>>>>  INFO [2016-08-04 00:24:56,233] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Copying 
>>>>>>> /home/ubuntu/zeppelin/interpreter/spark/pyspark/pyspark.zip to 
>>>>>>> /tmp/spark-96766055-2740-40cc-b077-475152c38b03/userFiles-9f602668-e282-4214-be77-c68e36e9e110/pyspark.zip
>>>>>>>  INFO [2016-08-04 00:24:56,242] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Added file 
>>>>>>> file:/home/ubuntu/zeppelin/interpreter/spark/pyspark/pyspark.zip at 
>>>>>>> http://10.1.4.253:42734/files/pyspark.zip with timestamp 1470270296233
>>>>>>>  INFO [2016-08-04 00:24:56,243] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Copying 
>>>>>>> /home/ubuntu/zeppelin/interpreter/spark/pyspark/py4j-0.9-src.zip to 
>>>>>>> /tmp/spark-96766055-2740-40cc-b077-475152c38b03/userFiles-9f602668-e282-4214-be77-c68e36e9e110/py4j-0.9-src.zip
>>>>>>>  INFO [2016-08-04 00:24:56,251] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Added file 
>>>>>>> file:/home/ubuntu/zeppelin/interpreter/spark/pyspark/py4j-0.9-src.zip 
>>>>>>> at http://10.1.4.253:42734/files/py4j-0.9-src.zip with timestamp 
>>>>>>> 1470270296243
>>>>>>>  INFO [2016-08-04 00:24:56,285] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logInfo]:58) - Created default pool default, 
>>>>>>> schedulingMode: FIFO, minShare: 0, weight: 1
>>>>>>>  INFO [2016-08-04 00:24:56,330] 
>>>>>>> ({appclient-register-master-threadpool-0} Logging.scala[logInfo]:58) - 
>>>>>>> Connecting to master spark://10.1.4.190:7077...
>>>>>>>  INFO [2016-08-04 00:25:16,331] 
>>>>>>> ({appclient-register-master-threadpool-0} Logging.scala[logInfo]:58) - 
>>>>>>> Connecting to master spark://10.1.4.190:7077...
>>>>>>> ERROR [2016-08-04 00:25:16,341] ({shuffle-client-0} 
>>>>>>> TransportResponseHandler.java[channelUnregistered]:122) - Still have 2 
>>>>>>> requests outstanding when connection from ip-10-1-4-190/10.1.4.190:7077 
>>>>>>> is closed
>>>>>>>  WARN [2016-08-04 00:25:16,343] 
>>>>>>> ({appclient-register-master-threadpool-0} Logging.scala[logWarning]:91) 
>>>>>>> - Failed to connect to master 10.1.4.190:7077
>>>>>>> java.io.IOException: Connection from ip-10-1-4-190/10.1.4.190:7077 closed
>>>>>>>         at 
>>>>>>> org.apache.spark.network.client.TransportResponseHandler.channelUnregistered(TransportResponseHandler.java:124)
>>>>>>>         at 
>>>>>>> org.apache.spark.network.server.TransportChannelHandler.channelUnregistered(TransportChannelHandler.java:94)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144)
>>>>>>>         at 
>>>>>>> io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144)
>>>>>>>         at 
>>>>>>> io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144)
>>>>>>>         at 
>>>>>>> io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144)
>>>>>>>         at 
>>>>>>> io.netty.channel.DefaultChannelPipeline.fireChannelUnregistered(DefaultChannelPipeline.java:739)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:659)
>>>>>>>         at 
>>>>>>> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
>>>>>>>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
>>>>>>>         at 
>>>>>>> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>>>>>>>         at java.lang.Thread.run(Thread.java:745)
>>>>>>>  INFO [2016-08-04 00:25:36,330] 
>>>>>>> ({appclient-register-master-threadpool-0} Logging.scala[logInfo]:58) - 
>>>>>>> Connecting to master spark://10.1.4.190:7077...
>>>>>>>  INFO [2016-08-04 00:25:36,331] 
>>>>>>> ({appclient-register-master-threadpool-0} Logging.scala[logInfo]:58) - 
>>>>>>> Connecting to master spark://10.1.4.190:7077...
>>>>>>> ERROR [2016-08-04 00:25:36,332] ({shuffle-client-0} 
>>>>>>> TransportClient.java[operationComplete]:235) - Failed to send RPC 
>>>>>>> 5850386894071965768 to ip-10-1-4-190/10.1.4.190:7077: 
>>>>>>> java.nio.channels.ClosedChannelException
>>>>>>> java.nio.channels.ClosedChannelException
>>>>>>> ERROR [2016-08-04 00:25:36,333] ({shuffle-client-0} 
>>>>>>> TransportClient.java[operationComplete]:235) - Failed to send RPC 
>>>>>>> 4955841159714871653 to ip-10-1-4-190/10.1.4.190:7077: 
>>>>>>> java.nio.channels.ClosedChannelException
>>>>>>> java.nio.channels.ClosedChannelException
>>>>>>>  WARN [2016-08-04 00:25:36,334] 
>>>>>>> ({appclient-register-master-threadpool-0} Logging.scala[logWarning]:91) 
>>>>>>> - Failed to connect to master 10.1.4.190:7077
>>>>>>> java.io.IOException: Failed to send RPC 4955841159714871653 to 
>>>>>>> ip-10-1-4-190/10.1.4.190:7077: 
>>>>>>> java.nio.channels.ClosedChannelException
>>>>>>>         at 
>>>>>>> org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:239)
>>>>>>>         at 
>>>>>>> org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:226)
>>>>>>>         at 
>>>>>>> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
>>>>>>>         at 
>>>>>>> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:567)
>>>>>>>         at 
>>>>>>> io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:801)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:699)
>>>>>>>         at 
>>>>>>> io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1122)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
>>>>>>>         at 
>>>>>>> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
>>>>>>>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
>>>>>>>         at 
>>>>>>> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>>>>>>>         at java.lang.Thread.run(Thread.java:745)
>>>>>>> Caused by: java.nio.channels.ClosedChannelException
>>>>>>>  INFO [2016-08-04 00:25:56,330] 
>>>>>>> ({appclient-register-master-threadpool-0} Logging.scala[logInfo]:58) - 
>>>>>>> Connecting to master spark://10.1.4.190:7077...
>>>>>>>  INFO [2016-08-04 00:25:56,331] 
>>>>>>> ({appclient-register-master-threadpool-0} Logging.scala[logInfo]:58) - 
>>>>>>> Connecting to master spark://10.1.4.190:7077...
>>>>>>> ERROR [2016-08-04 00:25:56,332] ({appclient-registration-retry-thread} 
>>>>>>> Logging.scala[logError]:74) - Application has been killed. Reason: All 
>>>>>>> masters are unresponsive! Giving up.
>>>>>>> ERROR [2016-08-04 00:25:56,334] ({shuffle-client-0} 
>>>>>>> TransportClient.java[operationComplete]:235) - Failed to send RPC 
>>>>>>> 6244878284399143650 to ip-10-1-4-190/10.1.4.190:7077: 
>>>>>>> java.nio.channels.ClosedChannelException
>>>>>>> java.nio.channels.ClosedChannelException
>>>>>>>  WARN [2016-08-04 00:25:56,335] ({pool-2-thread-2} 
>>>>>>> Logging.scala[logWarning]:70) - Application ID is not initialized yet.
>>>>>>> ERROR [2016-08-04 00:25:56,339] ({shuffle-client-0} 
>>>>>>> TransportClient.java[operationComplete]:235) - Failed to send RPC 
>>>>>>> 4693556837279618360 to ip-10-1-4-190/10.1.4.190:7077: 
>>>>>>> java.nio.channels.ClosedChannelException
>>>>>>> java.nio.channels.ClosedChannelException
>>>>>>>  WARN [2016-08-04 00:25:56,340] 
>>>>>>> ({appclient-register-master-threadpool-0} Logging.scala[logWarning]:91) 
>>>>>>> - Failed to connect to master 10.1.4.190:7077
>>>>>>> java.io.IOException: Failed to send RPC 4693556837279618360 to 
>>>>>>> ip-10-1-4-190/10.1.4.190:7077: 
>>>>>>> java.nio.channels.ClosedChannelException
>>>>>>>         at 
>>>>>>> org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:239)
>>>>>>>         at 
>>>>>>> org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:226)
>>>>>>>         at 
>>>>>>> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
>>>>>>>         at 
>>>>>>> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:567)
>>>>>>>         at 
>>>>>>> io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:801)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:699)
>>>>>>>         at 
>>>>>>> io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1122)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
>>>>>>>         at 
>>>>>>> io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
>>>>>>>         at 
>>>>>>> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
>>>>>>>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
>>>>>>>         at 
>>>>>>> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>>>>>>>         at java.lang.Thread.run(Thread.java:745)
>>>>>>> Caused by: java.nio.channels.ClosedChannelException
>>>>>>>  
>>>>>>> 
>>>>>>> The error log from my spark master is:
>>>>>>> 
>>>>>>> 16/08/04 00:25:15 ERROR actor.OneForOneStrategy: Error while decoding 
>>>>>>> incoming Akka PDU of length: 1305
>>>>>>> akka.remote.transport.AkkaProtocolException: Error while decoding 
>>>>>>> incoming Akka PDU of length: 1305
>>>>>>> Caused by: akka.remote.transport.PduCodecException: Decoding PDU failed.
>>>>>>>         at 
>>>>>>> akka.remote.transport.AkkaPduProtobufCodec$.decodePdu(AkkaPduCodec.scala:167)
>>>>>>>         at 
>>>>>>> akka.remote.transport.ProtocolStateActor.akka$remote$transport$ProtocolStateActor$$decodePdu(AkkaProtocolTransport.scala:513)
>>>>>>>         at 
>>>>>>> akka.remote.transport.ProtocolStateActor$$anonfun$4.applyOrElse(AkkaProtocolTransport.scala:320)
>>>>>>>         at 
>>>>>>> akka.remote.transport.ProtocolStateActor$$anonfun$4.applyOrElse(AkkaProtocolTransport.scala:292)
>>>>>>>         at 
>>>>>>> scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
>>>>>>>         at akka.actor.FSM$class.processEvent(FSM.scala:595)
>>>>>>>         at 
>>>>>>> akka.remote.transport.ProtocolStateActor.processEvent(AkkaProtocolTransport.scala:220)
>>>>>>>         at 
>>>>>>> akka.actor.FSM$class.akka$actor$FSM$$processMsg(FSM.scala:589)
>>>>>>>         at akka.actor.FSM$$anonfun$receive$1.applyOrElse(FSM.scala:583)
>>>>>>>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
>>>>>>>         at akka.actor.ActorCell.invoke(ActorCell.scala:456)
>>>>>>>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
>>>>>>>         at akka.dispatch.Mailbox.run(Mailbox.scala:219)
>>>>>>>         at 
>>>>>>> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
>>>>>>>         at 
>>>>>>> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>>>>>>         at 
>>>>>>> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>>>>>>         at 
>>>>>>> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>>>>>>         at 
>>>>>>> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>>>>>> Caused by: com.google.protobuf_spark.InvalidProtocolBufferException: 
>>>>>>> Protocol message contained an invalid tag (zero).
>>>>>>>         at 
>>>>>>> com.google.protobuf_spark.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:68)
>>>>>>>         at 
>>>>>>> com.google.protobuf_spark.CodedInputStream.readTag(CodedInputStream.java:108)
>>>>>>>         at 
>>>>>>> akka.remote.WireFormats$AkkaProtocolMessage$Builder.mergeFrom(WireFormats.java:5410)
>>>>>>>         at 
>>>>>>> akka.remote.WireFormats$AkkaProtocolMessage$Builder.mergeFrom(WireFormats.java:5275)
>>>>>>>         at 
>>>>>>> com.google.protobuf_spark.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:300)
>>>>>>>         at 
>>>>>>> com.google.protobuf_spark.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:238)
>>>>>>>         at 
>>>>>>> com.google.protobuf_spark.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:162)
>>>>>>>         at 
>>>>>>> com.google.protobuf_spark.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:716)
>>>>>>>         at 
>>>>>>> com.google.protobuf_spark.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:238)
>>>>>>>         at 
>>>>>>> com.google.protobuf_spark.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:153)
>>>>>>>         at 
>>>>>>> com.google.protobuf_spark.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:709)
>>>>>>>         at 
>>>>>>> akka.remote.WireFormats$AkkaProtocolMessage.parseFrom(WireFormats.java:5209)
>>>>>>>         at 
>>>>>>> akka.remote.transport.AkkaPduProtobufCodec$.decodePdu(AkkaPduCodec.scala:168)
>>>>>>>         ... 17 more
>>>>>>> 
>>>>>>> Regards,
>>>>>>> 
>>>>>>> Brian
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 
>> 
>> 
>> -- 
>> 이종열, Jongyoul Lee, 李宗烈
>> http://madeng.net
> 
