With respect to virtual hosts, my team uses Vagrant/VirtualBox. We have 3
CentOS VMs with 4 GB RAM each: 2 worker nodes and a master node.
Everything works fine, though if you are using MapR, you have to make sure
they are all on the same subnet.
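A minimal Vagrantfile sketch of that kind of setup. The box name, hostnames, and IPs below are illustrative, not our actual config; the point is that the private-network addresses put all three VMs on the same subnet, which MapR requires:

```ruby
# Hypothetical sketch: 1 master + 2 workers, CentOS, 4 GB RAM each,
# all on one private /24 subnet (hostnames and IPs are made up).
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  nodes = { "master"  => "192.168.50.10",
            "worker1" => "192.168.50.11",
            "worker2" => "192.168.50.12" }

  nodes.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: ip   # same subnet for all VMs
      node.vm.provider "virtualbox" do |vb|
        vb.memory = 4096                          # 4 GB RAM per VM
      end
    end
  end
end
```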
-Suren
On Fri, May 30, 2014 at 12:20 PM,
Also, the Spark examples can run out of the box on a single machine, as
well as on a cluster. See the Master URLs heading here:
http://spark.apache.org/docs/latest/submitting-applications.html#master-urls
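For example, the same bundled SparkPi example can be pointed at a single machine or a cluster just by changing --master (the jar path, host, and port below are placeholders):

```shell
# Run the bundled SparkPi example locally, using all available cores:
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master "local[*]" \
  lib/spark-examples.jar 100

# Same job against a standalone cluster (master-host:7077 is a placeholder):
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://master-host:7077 \
  lib/spark-examples.jar 100
```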
On Fri, May 30, 2014 at 9:24 AM, Surendranauth Hiraman
suren.hira...@velos.io wrote:
Running Hadoop and HDFS on an unsupported JVM runtime sounds a little
adventurous. But as long as Spark can run in a separate Java 8 runtime, it's
all good. I think having lambdas and type inference is huge when writing
these jobs, compared to using Scala (and paying its price of complexity,
poor tooling, etc.)
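As a concrete illustration of what those lambdas buy, here is a sketch using plain java.util streams (a runnable Spark job would need the Spark dependencies; the pre-Java-8 loop stands in for the anonymous Function<T, R> boilerplate Spark's Java API otherwise forces):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LambdaSketch {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("spark", "yarn", "mesos");

        // Java 7 style: an explicit loop (or, with Spark's Java API, an
        // anonymous Function<String, Integer> class) just to map values.
        List<Integer> lengthsOld = new ArrayList<>();
        for (String w : words) {
            lengthsOld.add(w.length());
        }

        // Java 8 style: a lambda, with the parameter type inferred.
        List<Integer> lengthsNew = words.stream()
                .map(w -> w.length())
                .collect(Collectors.toList());

        if (!lengthsOld.equals(lengthsNew)) {
            throw new AssertionError("results differ");
        }
        System.out.println(lengthsNew); // prints [5, 4, 5]
    }
}
```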
Hi Kristoffer,
You're correct that CDH5 only supports up to Java 7 at the moment. But
YARN apps do not run in the same JVM as YARN itself (and I believe MR1
doesn't either), so it might be possible to pass arguments in a way
that tells YARN to launch the application master / executors with the
Java 8 runtime instead.
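A hedged sketch of that idea, assuming a Java 8 JDK is installed at the same path on every node. spark.yarn.appMasterEnv.* and spark.executorEnv.* are real Spark configuration knobs for setting container environment variables, but the JDK path, job class, and jar below are placeholders, and whether CDH5 tolerates this is untested:

```shell
# Hypothetical: point the YARN containers at a separately installed JDK 8
# while YARN itself stays on the cluster's Java 7.
./bin/spark-submit \
  --master yarn-cluster \
  --conf spark.yarn.appMasterEnv.JAVA_HOME=/usr/java/jdk1.8.0 \
  --conf spark.executorEnv.JAVA_HOME=/usr/java/jdk1.8.0 \
  --class com.example.MyJob \
  my-job.jar
```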
I think the distinction there might be that they never said they ran that code
under CDH5, just that Spark supports it and Spark runs under CDH5. It's not
that you can use these features while running under CDH5.
They could use Mesos or the standalone scheduler to run them.
On Tue, May 6, 2014 at 6:16 AM,
Java 8 support is a feature in Spark, but vendors need to decide for themselves
when they’d like to support Java 8 commercially. You can still run Spark on Java 7
or 6 without taking advantage of the new features (indeed, our builds are always
against Java 6).
Matei
On May 6, 2014, at 8:59 AM,