Creating AMIs from scratch is a complete pain in the ass. If you have a spare
week, sure. I understand why the team avoids it.
The easiest way is probably to spin up a working instance and then use Amazon's
"save as new AMI" feature, but that has some major limitations, especially with
software not
Yeah, we badly need new AMIs that include, at a minimum, package/security
updates and Python 2.7. There is at least one open issue tracking the 2.7 AMI
update: https://issues.apache.org/jira/browse/SPARK-922.
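For reference, baking those updates into an image usually means running them on a live instance and then saving it as a new AMI. A rough sketch on Amazon Linux follows; the Python version, package names, and paths are illustrative assumptions, not taken from this thread:

```shell
# Hypothetical prep on a running Amazon Linux instance, before saving it
# as a new AMI. Versions and package names are illustrative.
sudo yum -y update                        # pull in package/security updates
sudo yum -y groupinstall "Development Tools"
sudo yum -y install openssl-devel zlib-devel

# Build Python 2.7 alongside the system python (don't replace /usr/bin/python,
# which yum depends on); "make altinstall" avoids clobbering it.
curl -O https://www.python.org/ftp/python/2.7.8/Python-2.7.8.tgz
tar xzf Python-2.7.8.tgz
cd Python-2.7.8
./configure --prefix=/usr/local
make && sudo make altinstall              # installs /usr/local/bin/python2.7
```

After that, the instance can be saved as a new AMI from the EC2 console or API.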
On Thu, Jun 12, 2014 at 3:34 PM, unorthodox.engine...@gmail.com wrote:
Creating AMIs
You can comment out this function and create a new one that returns
your AMI id; the rest of the script will run fine.
def get_spark_ami(opts):
    instance_types = {
        "m1.small": "pvm",
        "m1.medium": "pvm",
        "m1.large": "pvm",
        "m1.xlarge": "pvm",
        "t1.micro": "pvm",
        "c1.medium":
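A replacement along the lines Akhil describes could look like the sketch below. The AMI id is a placeholder, not a real image:

```python
# Hypothetical replacement for get_spark_ami in spark_ec2.py: skip the
# instance-type lookup entirely and return a fixed AMI id. The id below
# is a placeholder; substitute the id of your own prepared image.
def get_spark_ami(opts):
    return "ami-0123abcd"
```

The rest of spark_ec2.py only consumes the returned id string, so hard-coding it this way sidesteps the instance-type table.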
Thanks for the response Akhil. My email may not have been clear, but my
question is about what should be inside the AMI image, not how to pass an
AMI id in to the spark_ec2 script.
Should certain packages be installed? Do certain directories need to exist?
etc...
On Fri, Jun 6, 2014 at 4:40
Hi Matt,
You will need the following on the AMI:
1. Java installed
2. Root login enabled
3. /mnt should be available (since all the storage goes there)
The spark-ec2 script will set up the rest for you. Let me know if you
need any more clarification on this.
Thanks
Best Regards
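On a fresh Amazon Linux instance, Akhil's three requirements can be sketched roughly as follows. The Java package name, sshd edits, and key path are assumptions for illustration, not details from the thread:

```shell
# Hypothetical prep for a spark-ec2-compatible AMI on Amazon Linux.

# 1. Java installed (OpenJDK package name is an assumption)
sudo yum -y install java-1.7.0-openjdk

# 2. Root login enabled over SSH, keyed off the default user's key
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin without-password/' /etc/ssh/sshd_config
sudo mkdir -p /root/.ssh
sudo cp ~ec2-user/.ssh/authorized_keys /root/.ssh/authorized_keys
sudo service sshd restart

# 3. /mnt available for storage (instance store is often mounted here already)
mountpoint -q /mnt || sudo mkdir -p /mnt
```

Once those are in place, the instance can be saved as a new AMI and its id passed to spark_ec2.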
Thanks Akhil! I'll give that a try!
How would I go about creating a new AMI image that I can use with the spark
ec2 commands? I can't seem to find any documentation. I'm looking for a
list of steps that I'd need to perform to make an Amazon Linux image ready
to be used by the spark ec2 tools.
I've been reading through the spark