Hi,
I'm running Spark in standalone mode: 1 master, 15 slaves. I launched the
cluster with the ec2 script, and I'm currently breaking the job into many small
parts (~2,000) to make progress and failures easier to examine.
Pretty basic - submitting a PySpark job (via spark-submit) to the cluster. The
job
Never mind - I don't know what I was thinking with the below. It's just
maxTaskFailures causing the job to fail.
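For anyone hitting the same thing: the setting behind this behavior is Spark's
per-task retry limit, configurable as spark.task.maxFailures (default 4) at
submit time. A minimal sketch - the master URL, script name, and the value 8
below are placeholders, not from my actual setup:

```shell
# Sketch: raising the per-task retry limit (spark.task.maxFailures, default 4).
# A job is aborted once any single task has failed this many times, so a low
# limit makes transient task failures kill the whole job.
# Master URL and script name are placeholders.
spark-submit \
  --master spark://ec2-master:7077 \
  --conf spark.task.maxFailures=8 \
  my_job.py
```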
From: Griffiths, Michael (NYC-RPM) [mailto:michael.griffi...@reprisemedia.com]
Sent: Monday, November 10, 2014 4:48 PM
To: user@spark.apache.org
Subject: Spark Master crashes job
Hi,
I'm running into an error on Windows (x64, 8.1) running Spark 1.1.0 (pre-built
for Hadoop 2.4:
spark-1.1.0-bin-hadoop2.4.tgz <http://d3kbcqa49mib13.cloudfront.net/spark-1.1.0-bin-hadoop2.4.tgz>)
with Java SE Version 8 Update 20 (build 1.8.0_20-b26); just getting started
with Spark.
When