P.S. This is running on a remote Spark cluster in AWS, on Spark 1.3.0 and 1.3.1. 
It fails on both.

> On 23 Jun 2015, at 19:25, Wood, Dean Jr (GE Oil & Gas) <dean1.w...@ge.com> 
> wrote:
> 
> Hi,
> 
> I’ve tried to send this before but it didn’t come through.
> 
> I have a fairly simple set of commands which work in the spark shell. 
> However, they do not work on zeppelin. The commands are:
> 
> val wbIPR = Array(8.40, 8.09, 7.56, 6.83, 5.88, 4.72, 3.36, 1.78, 0.0)
> val pressureIPR = Array(0.00, 10000000.00, 20000000.00, 30000000.00,
>   40000000.00, 50000000.00, 60000000.00, 70000000.00, 80000000.00)
> case class IPR(w: Array[Double], p: Array[Double])
> val iprInit = IPR(wbIPR, pressureIPR)
> val iprList = sc.parallelize((1.0 to 0.9 by -0.001).map(k =>
>   IPR(iprInit.w.map(_ * k), iprInit.p)).zipWithIndex, 5)
> iprList.count()
> 
> The error message is:
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
> stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 
> (TID 54, ip-172-31-7-206.ec2.internal): ExecutorLostFailure (executor 14 lost)
> Driver stacktrace:
>   at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1203)
>   at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
>   at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1191)
> 
> Any ideas why this could be failing? I don’t think it’s a compile issue, 
> because if I run
> 
> val iprList = sc.parallelize(pressureIPR)
> iprList.count()
> 
> everything works.
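> 
> One thing I plan to try next (a guess on my part, not yet verified) is the 
> same pipeline with a plain tuple in place of the IPR case class, since case 
> classes defined inside a notebook cell are serialized along with the closure 
> and might not deserialize cleanly on the executors:
> 
> // Same pipeline, but using a tuple instead of the notebook-defined IPR
> // case class, to test whether class serialization is the problem.
> val iprListT = sc.parallelize((1.0 to 0.9 by -0.001).map(k =>
>   (wbIPR.map(_ * k), pressureIPR)).zipWithIndex, 5)
> iprListT.count()
> 
> If that count succeeds where iprList.count() fails, that would point at the 
> case class rather than the data.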
> 
> Thanks
> 
> Dean
