[ 
https://issues.apache.org/jira/browse/SPARK-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14570794#comment-14570794
 ] 

Grzegorz Dubicki edited comment on SPARK-3185 at 6/4/15 10:54 AM:
------------------------------------------------------------------

(EDITED) Sorry for the mess I made here. I had been using old code by mistake,
and that was the cause of my problem.

Anyway, for Spark 1.2.0 I used this patch
https://github.com/grzegorz-dubicki/spark-ec2/commit/a4a5a1713bac479f36aeccc1bd01ca02616e63bb
to use Tachyon 0.5.0 instead of 0.4.1, and my cluster now comes up without
the exception mentioned in the original issue description.
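
For context: the "Server IPC version 7 cannot communicate with client version 4" error is the usual symptom of a Hadoop 1 RPC client talking to a Hadoop 2 NameNode, so the Tachyon build that spark-ec2 installs has to be one that works against Hadoop 2. The snippet below is only a rough sketch of that kind of change in a spark-ec2 setup script, not the actual contents of the commit above; the variable names and the TACHYON_DIST_URL placeholder are assumptions.

{code}
# Rough sketch only -- not the actual patch. Variable names and the
# TACHYON_DIST_URL placeholder are assumptions for illustration.
# Idea: when the cluster is launched with --hadoop-major-version=2, install a
# Tachyon release that works with Hadoop 2 instead of the old 0.4.1 default.

if [[ "$HADOOP_MAJOR_VERSION" == "2" ]]; then
  TACHYON_VERSION="0.5.0"    # newer release, usable with the Hadoop 2 client
else
  TACHYON_VERSION="0.4.1"    # old default, built against Hadoop 1
fi

wget "${TACHYON_DIST_URL}/tachyon-${TACHYON_VERSION}-bin.tar.gz"
tar xzf "tachyon-${TACHYON_VERSION}-bin.tar.gz"
rm -rf /root/tachyon
mv "tachyon-${TACHYON_VERSION}" /root/tachyon
{code}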

In Spark 1.3.1 this is already fixed by 
https://github.com/mesos/spark-ec2/pull/102.
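
With that merged into the 1.3.1 spark-ec2 scripts, a Hadoop 2 launch like the one from the issue description should format the Tachyon journal without the IPC error, e.g. (key pair, key file and cluster name are just the ones from the original report):

{code}
./spark-ec2 -k spark_cluster -i /home/ec2-user/kagi/spark_cluster.ppk \
  --zone=us-east-1a --hadoop-major-version=2 --spot-price=0.0165 -s 3 launch sparkProd
{code}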


was (Author: grzegorz-dubicki):
(EDITED) Sorry for the mess I made here. I had been using old code by mistake,
and that was the cause of my problem.

Anyway, I used this patch
https://github.com/grzegorz-dubicki/spark-ec2/commit/a4a5a1713bac479f36aeccc1bd01ca02616e63bb
to use Tachyon 0.5.0 instead of 0.4.1, and my cluster now comes up without
the exception mentioned in the original issue description.

> SPARK launch on Hadoop 2 in EC2 throws Tachyon exception when Formatting 
> JOURNAL_FOLDER
> ---------------------------------------------------------------------------------------
>
>                 Key: SPARK-3185
>                 URL: https://issues.apache.org/jira/browse/SPARK-3185
>             Project: Spark
>          Issue Type: Bug
>          Components: EC2
>    Affects Versions: 1.0.2
>         Environment: Amazon Linux AMI
> [ec2-user@ip-172-30-1-145 ~]$ uname -a
> Linux ip-172-30-1-145 3.10.42-52.145.amzn1.x86_64 #1 SMP Tue Jun 10 23:46:43 
> UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
> https://aws.amazon.com/amazon-linux-ami/2014.03-release-notes/
> The build I used (and MD5 verified):
> [ec2-user@ip-172-30-1-145 ~]$ wget 
> http://supergsego.com/apache/spark/spark-1.0.2/spark-1.0.2-bin-hadoop2.tgz
>            Reporter: Jeremy Chambers
>
> {code}
> org.apache.hadoop.ipc.RemoteException: Server IPC version 7 cannot 
> communicate with client version 4
> {code}
> When I launch Spark 1.0.2 on Hadoop 2 in a new EC2 cluster, the above Tachyon
> exception is thrown when "Formatting JOURNAL_FOLDER".
> No exception occurs when I launch on Hadoop 1.
> Launch used:
> {code}
> ./spark-ec2 -k spark_cluster -i /home/ec2-user/kagi/spark_cluster.ppk 
> --zone=us-east-1a --hadoop-major-version=2 --spot-price=0.0165 -s 3 launch 
> sparkProd
> {code}
> {code}
> ----log snippet----
> Formatting Tachyon Master @ ec2-54-80-49-244.compute-1.amazonaws.com
> Formatting JOURNAL_FOLDER: /root/tachyon/libexec/../journal/
> Exception in thread "main" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException: Server IPC version 7 cannot 
> communicate with client version 4
>         at tachyon.util.CommonUtils.runtimeException(CommonUtils.java:246)
>         at tachyon.UnderFileSystemHdfs.<init>(UnderFileSystemHdfs.java:73)
>         at tachyon.UnderFileSystemHdfs.getClient(UnderFileSystemHdfs.java:53)
>         at tachyon.UnderFileSystem.get(UnderFileSystem.java:53)
>         at tachyon.Format.main(Format.java:54)
> Caused by: org.apache.hadoop.ipc.RemoteException: Server IPC version 7 cannot 
> communicate with client version 4
>         at org.apache.hadoop.ipc.Client.call(Client.java:1070)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>         at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>         at 
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>         at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>         at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
>         at tachyon.UnderFileSystemHdfs.<init>(UnderFileSystemHdfs.java:69)
>         ... 3 more
> Killed 0 processes
> Killed 0 processes
> ec2-54-167-219-159.compute-1.amazonaws.com: Killed 0 processes
> ec2-54-198-198-17.compute-1.amazonaws.com: Killed 0 processes
> ec2-54-166-36-0.compute-1.amazonaws.com: Killed 0 processes
> ---end snippet---
> {code}
> *I don't have this problem when I launch without the 
> "--hadoop-major-version=2" (which defaults to Hadoop 1.x).*


