Hey Charles,

How can I try these new scripts, since they're not yet "shipped"?

Thanks,
Dharmesh


On Fri, Jun 21, 2013 at 6:18 AM, Charles Reiss <[email protected]> wrote:

>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11914/
> -----------------------------------------------------------
>
> (Updated June 21, 2013, 12:48 a.m.)
>
>
> Review request for mesos.
>
>
> Changes
> -------
>
> Updated diffs: some fixes that came up during testing, plus remembering to
> git add the AMI setup script and the README file for it.
>
> Unfortunately, it seems that this version of Review Board has some problems
> with the renames in the diff.
>
>
> Description
> -------
>
> This is a rather big change that updates the EC2 scripts to deal with
> changes in Mesos over the past 18 months (or so) and to be usable under the
> assumption that no one will be maintaining an "official" Mesos AMI.
>
> The big changes are:
> - users are expected to supply their own AMI; there is no default;
> - the web UI is firewalled off, since exposing the libprocess port is
> dangerous (AFAIK); instead, users are instructed to use an SSH-forwarded
> SOCKS proxy (see the example below);
> - instructions and a script for building an AMI are included (these
> should work on many Debian- or Fedora-like Linux distros); the resulting
> AMI should be similar to the one that was originally supplied, including
> installations of Hadoop (set up to work with Mesos) and Spark;
> - the deploy scripts are used to launch Mesos;
> - support for downloading Mesos from git and rebuilding and redeploying
> it is dropped;
> - support for Torque/Hypertable/haproxy in the AMI is dropped.
>
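> As a minimal sketch of the SSH-forwarded SOCKS proxy mentioned above (the
> key file, user name, and hostname below are placeholders, not values
> produced by these scripts):
>
>     # Open a SOCKS proxy on local port 1080, tunneled through the master.
>     # Pointing a browser's SOCKS proxy setting at localhost:1080 then
>     # reaches the firewalled web UI via the master.
>     ssh -i ~/.ssh/my-key.pem -N -D 1080 root@<master-public-dns>
>
> Any free local port works in place of 1080.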
>
> This addresses bug MESOS-500.
>     https://issues.apache.org/jira/browse/MESOS-500
>
>
> Diffs (updated)
> -----
>
>   ec2/Makefile.am 8c64f485888df1599697eb181fc76aa83206da07
>   ec2/README.md PRE-CREATION
>   ec2/deploy.amazon64/root/ephemeral-hdfs/conf/core-site.xml
>   ec2/deploy.amazon64/root/ephemeral-hdfs/conf/hadoop-env.sh 4e1e6991591e09f8860ab130948b0e787fce2b42
>   ec2/deploy.amazon64/root/ephemeral-hdfs/conf/hdfs-site.xml 43e68aa3e2ecbb53bd3b0a99d33b11aadda2cca7
>   ec2/deploy.amazon64/root/ephemeral-hdfs/conf/mapred-site.xml
>   ec2/deploy.amazon64/root/ephemeral-hdfs/conf/masters
>   ec2/deploy.amazon64/root/ephemeral-hdfs/conf/slaves
>   ec2/deploy.amazon64/root/mesos-ec2/cluster-url
>   ec2/deploy.amazon64/root/mesos-ec2/copy-dir
>   ec2/deploy.amazon64/root/mesos-ec2/create-swap
>   ec2/deploy.amazon64/root/mesos-ec2/hadoop-framework-conf/core-site.xml
>   ec2/deploy.amazon64/root/mesos-ec2/hadoop-framework-conf/hadoop-env.sh d8483140546fe00d6d17494c14ef4b09ae368496
>   ec2/deploy.amazon64/root/mesos-ec2/hadoop-framework-conf/mapred-site.xml 0ffa92f115d66adc9e370f4030850216545ade38
>   ec2/deploy.amazon64/root/mesos-ec2/haproxy+apache/haproxy.config.template 957c3f6a6b2a2f658e076337d88e69447b2a3341
>   ec2/deploy.amazon64/root/mesos-ec2/hypertable/Capfile 8b50912745977cb71232ba1dfa77f8bb0d60191e
>   ec2/deploy.amazon64/root/mesos-ec2/hypertable/hypertable.cfg b4d5b7475fab3d4a842e4b0f459abc5ca316996a
>   ec2/deploy.amazon64/root/mesos-ec2/masters
>   ec2/deploy.amazon64/root/mesos-ec2/mesos-daemon bed27657f8718eecafb83cca5d29e0612a87f129
>   ec2/deploy.amazon64/root/mesos-ec2/redeploy-mesos 941d783d82f0708a6da0f4677c3364537dfded63
>   ec2/deploy.amazon64/root/mesos-ec2/setup b6b736091d4d5be431c8da29cdb98360a1df2d29
>   ec2/deploy.amazon64/root/mesos-ec2/setup-slave 436f417bc5a746ad74cc88c27e630a91d55b0b23
>   ec2/deploy.amazon64/root/mesos-ec2/setup-torque 2ac8fd3546063d3ba391147383de53b7824c7c8c
>   ec2/deploy.amazon64/root/mesos-ec2/slaves
>   ec2/deploy.amazon64/root/mesos-ec2/ssh-no-keychecking
>   ec2/deploy.amazon64/root/mesos-ec2/start-hypertable af16c2d7bd615cb0c98f6ba65ff5c69859678850
>   ec2/deploy.amazon64/root/mesos-ec2/start-mesos 0f551db396fa7ffebef880dca0232c22808ff7cc
>   ec2/deploy.amazon64/root/mesos-ec2/stop-hypertable 7280dc11bfc53ae84b7ecaba34c84810461ed7f4
>   ec2/deploy.amazon64/root/mesos-ec2/stop-mesos 9fdb8753dffc5115f94582753e0860538be6232b
>   ec2/deploy.amazon64/root/mesos-ec2/zoo
>   ec2/deploy.amazon64/root/persistent-hdfs/conf/core-site.xml
>   ec2/deploy.amazon64/root/persistent-hdfs/conf/hadoop-env.sh b38ba01817e3b9c9715a476ecb692bac68983f50
>   ec2/deploy.amazon64/root/persistent-hdfs/conf/hdfs-site.xml ec000cb2f306232a29cf701b10f230685d2662c9
>   ec2/deploy.amazon64/root/persistent-hdfs/conf/mapred-site.xml
>   ec2/deploy.amazon64/root/persistent-hdfs/conf/masters
>   ec2/deploy.amazon64/root/persistent-hdfs/conf/slaves
>   ec2/deploy.amazon64/root/spark/conf/spark-env.sh 6b331ff2257d1b88c8ebe2e8095887a3e2cf0d54
>   ec2/deploy.generic/root/mesos-ec2/ec2-variables.sh 1f76e61b2feffa4eadc53745caa5033dde182531
>   ec2/deploy.generic/root/mesos-ec2/mesos-master-env.sh PRE-CREATION
>   ec2/deploy.generic/root/mesos-ec2/mesos-slave-env.sh PRE-CREATION
>   ec2/deploy.generic/root/mesos-ec2/redeploy-mesos PRE-CREATION
>   ec2/deploy.generic/root/mesos-ec2/spark-env.sh PRE-CREATION
>   ec2/mesos-ec2 3bc5d6307b2759cec0cea430b44baa2809a6f2d2
>   ec2/mesos_ec2.py 94fd7b2181d138c97da609fb0ed2ebdc6206cfd6
>   ec2/setup-mesos-ami.sh PRE-CREATION
>
> Diff: https://reviews.apache.org/r/11914/diff/
>
>
> Testing (updated)
> -------
>
> I've done some very minimal testing: Spark runs over Mesos and can write
> to the ephemeral HDFS instance, and the Hadoop JobTracker can connect to
> Mesos and the ephemeral HDFS.
>
>
> Thanks,
>
> Charles Reiss
>
>
