Thanks Cody.

I reported the core dump as an issue on the Spark JIRA, and a developer
diagnosed it as an OpenJDK issue.

So I switched over to Oracle Java 8 and... no more core dumps on the
examples. I reported the OpenJDK issue at the IcedTea Bugzilla.

Looks like I'm off and running with Spark on an ARM Chromebook!
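
For anyone who hits the same thing: the example that was crashing for me was
SparkPi. As a reference point, here is a minimal Scala sketch of the same
Monte Carlo pi estimate. It only approximates the bundled example rather than
reproducing its exact code, and it assumes an existing SparkContext named sc
(e.g. from spark-shell):

    // Rough sketch of a SparkPi-style Monte Carlo estimate of pi.
    // Assumes a SparkContext named sc is already available (e.g. in spark-shell).
    val n = 100000
    val count = sc.parallelize(1 to n).map { _ =>
      val x = scala.util.Random.nextDouble() * 2 - 1
      val y = scala.util.Random.nextDouble() * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)

With Oracle Java 8 this kind of job now runs cleanly on the Chromebook.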


On Sat, Dec 27, 2014 at 10:36 AM, Cody Koeninger <c...@koeninger.org> wrote:

> There are hardware recommendations at
> http://spark.apache.org/docs/latest/hardware-provisioning.html  but
> they're overkill for just testing things out.  You should be able to get
> meaningful work done with two m3.large instances, for example.
>
> On Sat, Dec 27, 2014 at 8:27 AM, Amy Brown <testingwithf...@gmail.com>
> wrote:
>
>> Hi all,
>>
>> Brand new to Spark and to big data technologies in general. Eventually
>> I'd like to contribute to the testing effort on Spark.
>>
>> I have an ARM Chromebook at my disposal: that's it for the moment. I can
>> vouch that it's OK for sending Hive queries to an AWS EMR cluster via SQL
>> Workbench.
>>
>> I ran the SparkPi example using the prebuilt Hadoop 2.4 package and got a
>> fatal error. I can post that error log if anyone wants to see it, but I
>> want to rule out the obvious cause.
>>
>> Can anyone make recommendations as to minimum system requirements for
>> using Spark - for example, with an AWS EMR cluster? I didn't see any on the
>> Spark site.
>>
>> Thanks,
>>
>> Amy Brown
>>
>
>
