[jira] [Commented] (SPARK-5008) Persistent HDFS does not recognize EBS Volumes

2015-01-13 Thread Brad Willard (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275586#comment-14275586 ]

Brad Willard commented on SPARK-5008:
-

[~nchammas] I went ahead and created a cluster with this

./spark-ec2 -v 1.2.0 --wait 235 -k ... --copy-aws-credentials \
  --hadoop-major-version 1 -z us-east-1c -s 2 -m c1.medium -t c1.medium \
  launch spark-hdfs-bug --ebs-vol-size 10 --ebs-vol-type gp2 --ebs-vol-num 1

I updated core-site.xml, switching /vol to /vol0, ran copy-dir, and restarted 
via stop-all.sh and start-all.sh.
That brings the cluster up in a broken state. However, if I then modify 
core-site.xml back to /vol on the master and restart, it works correctly.

So that's a partial solution. I assume this is because the master node doesn't 
get an EBS volume.
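
The steps above can be sketched roughly as follows. This is a hypothetical sketch: the conf path, the copy-dir location, and the stop/start scripts are assumptions based on the default spark-ec2 1.2.0 master layout, and the sed edit should be reviewed by hand before use.

```shell
# Hypothetical helper: rewrite /vol to /vol0 in a persistent-hdfs config file.
# \b keeps /vol0 itself from being rewritten (no word boundary before the 0).
switch_hdfs_vol() {
  local conf="$1"   # e.g. /root/persistent-hdfs/conf/core-site.xml (assumed path)
  sed -i 's|/vol\b|/vol0|g' "$conf"
}

# Then, on the master (paths assumed from the spark-ec2 layout):
#   switch_hdfs_vol /root/persistent-hdfs/conf/core-site.xml
#   /root/spark-ec2/copy-dir /root/persistent-hdfs/conf    # push conf to slaves
#   /root/persistent-hdfs/bin/stop-all.sh
#   /root/persistent-hdfs/bin/start-all.sh
```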

 Persistent HDFS does not recognize EBS Volumes
 --

 Key: SPARK-5008
 URL: https://issues.apache.org/jira/browse/SPARK-5008
 Project: Spark
  Issue Type: Bug
  Components: EC2
Affects Versions: 1.2.0
 Environment: 8 Node Cluster Generated from 1.2.0 spark-ec2 script.
 -m c3.2xlarge -t c3.8xlarge --ebs-vol-size 300 --ebs-vol-type gp2 
 --ebs-vol-num 1
Reporter: Brad Willard

 Cluster is built with correct-size EBS volumes. It creates the volume at 
 /dev/xvds, and it is mounted to /vol0. However, when you start persistent HDFS 
 with the start-all script, it starts but isn't correctly configured to use the 
 EBS volume.
 I'm assuming some symlinks or expected mounts are not correctly configured.
 This has worked flawlessly on all previous versions of Spark.
 I have a crude workaround: installing pssh and mounting the volume to /vol 
 worked, however it doesn't survive restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-5008) Persistent HDFS does not recognize EBS Volumes

2015-01-11 Thread Nicholas Chammas (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273007#comment-14273007 ]

Nicholas Chammas commented on SPARK-5008:
-

Use [{{copy-dir}}|https://github.com/mesos/spark-ec2/blob/v4/copy-dir.sh], 
which is installed by default, from the master.
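
A usage sketch of that answer (the paths are assumptions based on the default spark-ec2 master layout, not verified against this cluster):

```shell
# After editing /root/persistent-hdfs/conf/core-site.xml on the master, run
#   /root/spark-ec2/copy-dir /root/persistent-hdfs/conf
# copy-dir rsyncs the given directory from the master to every slave in the
# cluster, so all nodes see the same persistent-hdfs configuration.
```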


[jira] [Commented] (SPARK-5008) Persistent HDFS does not recognize EBS Volumes

2015-01-11 Thread Brad Willard (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272991#comment-14272991 ]

Brad Willard commented on SPARK-5008:
-

[~nchammas] I can try that once I get back into the office. Probably by 
Wednesday. Once I update the core-site.xml, what's the correct way to sync it 
to all the slaves?


[jira] [Commented] (SPARK-5008) Persistent HDFS does not recognize EBS Volumes

2015-01-10 Thread Nicholas Chammas (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272585#comment-14272585 ]

Nicholas Chammas commented on SPARK-5008:
-

cc [~shivaram]

[~brdwrd] - What was the last version this worked at? 1.1.1?


[jira] [Commented] (SPARK-5008) Persistent HDFS does not recognize EBS Volumes

2015-01-10 Thread Brad Willard (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272593#comment-14272593 ]

Brad Willard commented on SPARK-5008:
-

Yes. 1.1.1 was fine.


On Sat, Jan 10, 2015 at 11:58 AM, Nicholas Chammas (JIRA) j...@apache.org




[jira] [Commented] (SPARK-5008) Persistent HDFS does not recognize EBS Volumes

2015-01-10 Thread Shivaram Venkataraman (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272830#comment-14272830 ]

Shivaram Venkataraman commented on SPARK-5008:
--

Hmm, I think https://github.com/mesos/spark-ec2/pull/66 probably broke this in 
some way. We made some tweaks to keep spark-ec2 backwards compatible by 
symlinking /vol3 to /vol -- however I think the new behavior is now broken, as 
persistent-hdfs expects /vol to exist and can't find it.

I think one fix might be to create a symlink from /vol0 to /vol if /vol3 
doesn't exist -- or we could also change core-site.xml in persistent-hdfs to 
pick up all the volumes.
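
The first fix could be sketched as a few lines in the node-setup scripts. This is a hypothetical sketch of the idea, not the actual spark-ec2 code; the function name and the optional prefix argument are inventions for illustration.

```shell
# Hypothetical symlink fallback: if the legacy /vol mount point is absent but
# the first EBS volume was mounted at /vol0, link it back so persistent-hdfs
# finds the path it expects.
ensure_vol_symlink() {
  local root="${1:-}"   # optional path prefix for testing; empty on a real node
  if [ ! -e "$root/vol" ] && [ -d "$root/vol0" ]; then
    ln -s "$root/vol0" "$root/vol"
  fi
}
# On a real slave this would run simply as: ensure_vol_symlink
```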


[jira] [Commented] (SPARK-5008) Persistent HDFS does not recognize EBS Volumes

2015-01-10 Thread Nicholas Chammas (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272846#comment-14272846 ]

Nicholas Chammas commented on SPARK-5008:
-

Though I'm not too familiar with this stuff yet, updating {{core-site.xml}} to 
cover all volumes seems like the more future-proof way to go.

[~brdwrd] - Are you able to come up with a solution to the problem by updating 
{{core-site.xml}}? We can just bake that into the default file that 
{{spark-ec2}} creates.
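
One hedged sketch of what "cover all volumes" could mean: in Hadoop 1.x (this cluster uses --hadoop-major-version 1), dfs.data.dir in hdfs-site.xml accepts a comma-separated list of directories, so the generated config could enumerate every mounted volume. The paths below are illustrative, not the actual spark-ec2 template:

```xml
<!-- Hypothetical hdfs-site.xml fragment; volume paths are illustrative. -->
<property>
  <name>dfs.data.dir</name>
  <value>/vol0/persistent-hdfs/data,/vol1/persistent-hdfs/data</value>
</property>
```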
