Hi,

I submitted a Spark job to an EC2 cluster using spark-submit.  On one of the
worker nodes, I see the following 'no space left on device' exception:

==========================================
15/02/08 01:53:38 ERROR logging.FileAppender: Error writing stream to file /root/spark/work/app-20150208014557-0003/0/stdout
java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:345)
        at org.apache.spark.util.logging.FileAppender.appendToFile(FileAppender.scala:92)
        at org.apache.spark.util.logging.FileAppender.appendStreamToFile(FileAppender.scala:72)
        at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply$mcV$sp(FileAppender.scala:39)
        at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39)
        at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1311)
        at org.apache.spark.util.logging.FileAppender$$anon$1.run(FileAppender.scala:38)
===========================================

Running df on that worker node shows the following:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda1             8256920   8256456         0 100% /
tmpfs                  7752012         0   7752012   0% /dev/shm
/dev/xvdb             30963708   1729652  27661192   6% /mnt
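
Given that / is completely full while /mnt still has roughly 27 GB free, I
was thinking of pointing the worker's work and scratch directories at /mnt,
roughly like this in conf/spark-env.sh on each worker (just a guess on my
part, and the /mnt/spark-* paths are arbitrary names I made up):

# directory the worker runs applications in, including the stdout/stderr logs
export SPARK_WORKER_DIR=/mnt/spark-work
# storage directories for shuffle files and RDD data spilled to disk
export SPARK_LOCAL_DIRS=/mnt/spark-local

As I understand it, /mnt is the instance's ephemeral storage on these EC2
nodes, so losing its contents on a stop/start should not matter for logs and
scratch space, but I have not tested this.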

Does anybody know how to fix this?  Thanks.


Ey-Chih Chow



