[ https://issues.apache.org/jira/browse/SPARK-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272830#comment-14272830 ]
Shivaram Venkataraman commented on SPARK-5008:
----------------------------------------------

Hmm, I think https://github.com/mesos/spark-ec2/pull/66 probably broke this. We made some tweaks to keep spark-ec2 backwards compatible by symlinking /vol3 to /vol. However, the new behavior is now broken: persistent-hdfs expects /vol to exist and can't find it. One fix might be to create a symlink from /vol0 to /vol if /vol3 doesn't exist. Alternatively, we could change core-site.xml in persistent-hdfs to pick up all the volumes.

> Persistent HDFS does not recognize EBS Volumes
> ----------------------------------------------
>
>                 Key: SPARK-5008
>                 URL: https://issues.apache.org/jira/browse/SPARK-5008
>             Project: Spark
>          Issue Type: Bug
>          Components: EC2
>    Affects Versions: 1.2.0
>        Environment: 8-node cluster generated from the 1.2.0 spark-ec2 script:
>                     -m c3.2xlarge -t c3.8xlarge --ebs-vol-size 300 --ebs-vol-type gp2 --ebs-vol-num 1
>            Reporter: Brad Willard
>
> The cluster is built with correctly sized EBS volumes. The volume is created at /dev/xvds and mounted at /vol0. However, when you start persistent HDFS with the start-all script, it starts but is not correctly configured to use the EBS volume.
> I'm assuming some symlinks or expected mounts are not correctly configured. This worked flawlessly on all previous versions of Spark.
> I have a crude workaround: installing pssh and remounting the volume at /vol. That worked, but it does not survive restarts.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
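The first fix Shivaram suggests (symlink /vol0 to /vol when the legacy /vol3 link is absent) could be sketched roughly as below. This is a hedged illustration, not code from the actual spark-ec2 setup scripts; it uses a sandbox directory ($ROOT) so it can run anywhere, whereas on a real node ROOT would be empty and the script would run on each cluster machine (e.g. via pssh, as the reporter did by hand):

```shell
# Sandbox stand-in for the filesystem root; on a real node, ROOT="".
ROOT=$(mktemp -d)

# Simulate the spark-ec2 layout described in the report: the first
# EBS volume is mounted at /vol0.
mkdir -p "$ROOT/vol0"

# If neither the legacy /vol3 symlink nor /vol exists, point /vol at
# the first EBS mount so persistent-hdfs finds its expected directory.
if [ ! -e "$ROOT/vol3" ] && [ ! -e "$ROOT/vol" ] && [ -d "$ROOT/vol0" ]; then
    ln -s "$ROOT/vol0" "$ROOT/vol"
fi
```

Because the symlink is recreated only when missing, rerunning the script after a restart is harmless, which is exactly the property the reporter's manual remount lacked.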
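The alternative fix (teaching persistent-hdfs about all the volumes directly) would amount to listing every EBS mount in its configuration instead of relying on /vol. A hedged sketch, assuming the Hadoop 1.x property name dfs.data.dir and hypothetical /volN/persistent-hdfs data paths (the comment mentions core-site.xml, though in stock Hadoop this property usually lives in hdfs-site.xml; verify the property name and paths against the actual config the spark-ec2 scripts generate):

```xml
<!-- Hypothetical example: enumerate each mounted EBS volume so the
     DataNode stores blocks on all of them, not just /vol. -->
<property>
  <name>dfs.data.dir</name>
  <value>/vol0/persistent-hdfs/data,/vol1/persistent-hdfs/data</value>
</property>
```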