On some systems, /tmp is an in-memory tmpfs file system with its own size
limit, and it's possible that this limit has been exceeded. You might try
running the "df" command to check the free space of /tmp (or of the root
file system, if /tmp isn't listed as a separate mount).
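
For example (exact mount points and sizes will vary from system to
system):

    $ df -h /tmp    # free space on the file system holding /tmp
    $ df -i /tmp    # inode usage; lots of small shuffle files can exhaust
                    # inodes, which also surfaces as "No space left on device"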

3 GB of remaining free space also seems pretty low for a disk. If your
disk is in the TB range, the last couple of GB can be hard to allocate in
practice because of fragmentation or the file system's reclamation
policies.
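
If /tmp does turn out to be the bottleneck, one workaround (just a sketch,
assuming the Spark 0.9-era spark.local.dir property and SPARK_JAVA_OPTS
mechanism, and a hypothetical /mnt/bigdisk mount with more room) is to
point Spark's scratch space at a larger disk, e.g. in conf/spark-env.sh:

    export SPARK_JAVA_OPTS="$SPARK_JAVA_OPTS -Dspark.local.dir=/mnt/bigdisk/spark-tmp"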


On Sun, Mar 23, 2014 at 3:06 PM, Ognen Duzlevski
<og...@nengoiksvelzud.com> wrote:

> Hello,
>
> I have a weird error showing up when I run a job on my Spark cluster. The
> version of Spark is 0.9, and I have 3+ GB free on the disk when this error
> shows up. Any ideas what I should be looking for?
>
> [error] (run-main-0) org.apache.spark.SparkException: Job aborted: Task 167.0:3 failed 4 times (most recent failure: Exception failure: java.io.FileNotFoundException: /tmp/spark-local-20140323214638-72df/31/shuffle_31_3_127 (No space left on device))
> org.apache.spark.SparkException: Job aborted: Task 167.0:3 failed 4 times (most recent failure: Exception failure: java.io.FileNotFoundException: /tmp/spark-local-20140323214638-72df/31/shuffle_31_3_127 (No space left on device))
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1028)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1026)
>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1026)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
>     at scala.Option.foreach(Option.scala:236)
>     at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:619)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:207)
>
> Thanks!
> Ognen
>
