Hi Ken!

Good to know you like our charms and bundles! Are you working with Andrew &
Angie?

I have spoken with them several times, so I have a bit of background on
your use cases. Let me know if you would like a short hangout to discuss
your specific workload.

Specifically, if you want to use Spark in conjunction with Hadoop, you
probably want to deploy it on the same node as your YARN master. Assuming
you have deployed the YARN master service and named it yarn-master, you can
find its machine with the command below (install jq first with
"sudo apt-get install jq"):

 TARGET_MACHINE=$(juju stat \
   | python -c 'import sys, yaml, json; json.dump(yaml.load(sys.stdin), sys.stdout, indent=4)' \
   | jq '.services."yarn-master".units."yarn-master/0".machine' \
   | tr -d '"')

==> This command will output the ID of the machine running the YARN master.
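Note: if your juju client can emit JSON directly (check "juju status --help"
for a --format option), you should be able to skip the Python conversion
step. A sketch, assuming --format=json is available on your version:

 # Assumes your juju version supports --format=json for status output.
 TARGET_MACHINE=$(juju status --format=json \
   | jq -r '.services."yarn-master".units."yarn-master/0".machine')
 echo "YARN master runs on machine ${TARGET_MACHINE}"

The -r flag makes jq print the raw string, so the tr step is not needed
either.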

Then deploy the Spark charm to that machine:

juju deploy --to $TARGET_MACHINE cs:~asanjar/trusty/spark spark-master

Then you'll be able to read from Hadoop into Spark.
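Depending on how the spark charm is written, you may also need to relate it
to the Hadoop/YARN services so it picks up the cluster configuration. The
endpoint names below are only illustrative (check the charm's metadata.yaml
or README for the ones it actually declares):

 # Illustrative only: substitute the relation endpoints the charms actually declare.
 juju add-relation spark-master yarn-master

 # Then watch the unit come up on the same machine as the YARN master.
 juju status spark-master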



Best,
Samuel

--
Samuel Cozannet
Cloud, Big Data and IoT Strategy Team
Business Development - Cloud and ISV Ecosystem
Changing the Future of Cloud
Ubuntu <http://ubuntu.com>  / Canonical UK LTD <http://canonical.com> / Juju
<https://jujucharms.com>
samuel.cozan...@canonical.com
mob: +33 616 702 389
skype: samnco
Twitter: @SaMnCo_23

On Wed, Jan 28, 2015 at 4:44 PM, Ken Williams <ke...@theasi.co> wrote:

>
> Hi folks,
>
> I'm completely new to juju so any help is appreciated.
>
> I'm trying to create a hadoop/analytics-type platform.
>
> I've managed to install the 'data-analytics-with-sql-like' bundle
> (using this command)
>
>     juju quickstart
> bundle:data-analytics-with-sql-like/data-analytics-with-sql-like
>
> This is very impressive, and gives me virtually everything that I want
> (hadoop, hive, etc) - but I also need Spark.
>
> The Spark charm (http://manage.jujucharms.com/~asanjar/trusty/spark)
> and bundle (
> http://manage.jujucharms.com/bundle/~asanjar/spark/spark-cluster)
> however do not seem stable or available and I can't figure out how to
> install them.
>
> Should I just download and install the Spark tar-ball on the nodes
> in my AWS cluster, or is there a better way to do this ?
>
> Thanks in advance,
>
> Ken
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>