Hi,
The ec2 launch script provided by Spark uses
https://github.com/mesos/spark-ec2 to download and configure all the tools
on the cluster (Spark, Hadoop, etc.), so you can fork that repository
to achieve your goal. More precisely:
1. Upload your own build of Spark to S3 at a path of your choosing
2. Fork https://github.com/mesos/spark-ec2 and change
./spark/init.sh to wget your Spark tarball from that S3 path
3. Change line 638 in the ec2 launch script so that it git clones your
forked repository from GitHub
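A minimal sketch of the three steps above; the bucket name, tarball name, and
GitHub username here are hypothetical placeholders, and the exact line to edit
in the launch script may vary between Spark releases:

```shell
# Step 1 (hypothetical bucket/tarball names): upload your locally built
# Spark distribution to S3 and make it readable by the cluster nodes.
aws s3 cp spark-custom-bin.tgz s3://my-bucket/spark-custom-bin.tgz --acl public-read

# Step 2: in your fork of spark-ec2, replace the download logic in
# ./spark/init.sh with something along these lines:
wget https://my-bucket.s3.amazonaws.com/spark-custom-bin.tgz
tar -xzf spark-custom-bin.tgz
mv spark-custom-bin spark

# Step 3: in the ec2 launch script, clone your fork instead of mesos/spark-ec2:
git clone https://github.com/your-username/spark-ec2.git
```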
Hope this can be helpful.
Cheers
Gen
On Tue, Jan 6, 2015 at 11:51 PM, Ganon Pierce ganon.pie...@me.com wrote:
Is there a way to use the ec2 launch script with a locally built version
of Spark? I launch and destroy clusters pretty frequently and would like to
avoid waiting each time for the master instance to compile the source, as
happens when I set the -v flag to the latest git commit. To be clear, I
would like to launch a non-release version of Spark compiled locally as
quickly as I can launch a release version (e.g. -v 1.2.0), which does not
have to be compiled upon launch.
Up to this point, I have just used the launch script included with the
latest release to set up the cluster and then manually replaced the
assembly jar on the master and slaves with the version I built locally and
stored on S3. Is there anything wrong with doing it this way? Further,
is there a better or more standard way of accomplishing this?
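The manual replacement described above might look something like the
following; the hostname and jar name are hypothetical placeholders, and this
assumes the cluster was launched by spark-ec2, which installs a copy-dir
helper on the master for syncing a directory to the slaves:

```shell
# Copy the locally built assembly jar to the master (hostname/jar hypothetical).
scp -i my-key.pem spark-assembly-custom.jar \
    root@ec2-master-host:/root/spark/lib/

# From the master, push the updated directory out to all slaves.
ssh -i my-key.pem root@ec2-master-host \
    '/root/spark-ec2/copy-dir /root/spark/lib'
```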
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org