Hi all,

Brand new to Spark and to big data technologies in general. Eventually I'd
like to contribute to the testing effort on Spark.

I have an ARM Chromebook at my disposal: that's it for the moment. I can
vouch that it's OK for sending Hive queries to an AWS EMR cluster via SQL
Workbench.

I ran the SparkPi example using the prebuilt Hadoop 2.4 package and got a
fatal error. I can post the error log if anyone wants to see it, but first I
want to rule out the obvious cause.

Can anyone recommend minimum system requirements for using Spark - for
example, with an AWS EMR cluster? I didn't see any listed on the Spark
site.

Thanks,

Amy Brown
