Thanks Michael.  I didn't edit core-site.xml; we use the default one.  I only
saw hadoop.tmp.dir in core-site.xml, pointing to /mnt/ephemeral-hdfs.  How can
I edit the config file?
Best regards,
Ey-Chih

Date: Sun, 8 Feb 2015 16:51:32 +0000
From: m_albert...@yahoo.com
To: gen.tan...@gmail.com; eyc...@hotmail.com
CC: user@spark.apache.org
Subject: Re: no space left at worker node

You might want to take a look at core-site.xml and see which directories are
listed as usable (hadoop.tmp.dir, fs.s3.buffer.dir).
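For example, a quick way to inspect those values (the conf path is an
assumption based on a typical spark-ec2 layout):

    # Print the two properties and their current <value> lines
    grep -A1 -E 'hadoop.tmp.dir|fs.s3.buffer.dir' \
        /root/ephemeral-hdfs/conf/core-site.xml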
It seems that on these EC2 instances the root disk is relatively small (8G),
but the config files list a "mnt" directory under it.  Somehow the system
doesn't balance between the very small space it has under the root disk and
the larger disks, so the root disk fills up while the others sit unused.
At my site, we wrote a boot script to edit these problems out of the config
before Hadoop starts; a rough sketch is below.
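A minimal sketch of such a boot-time fix (not our actual script; the conf
path and property values are assumptions for illustration only):

    #!/bin/bash
    # Point fs.s3.buffer.dir at the large /mnt disk instead of the small
    # root disk, before Hadoop is started.
    CONF=/root/ephemeral-hdfs/conf/core-site.xml
    sed -i 's|<value>/tmp/s3</value>|<value>/mnt/s3</value>|' "$CONF"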
-Mike
From: gen tang <gen.tan...@gmail.com>
To: ey-chih chow <eyc...@hotmail.com>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Sent: Sunday, February 8, 2015 6:09 AM
Subject: Re: no space left at worker node
Hi,

In fact, I met this problem before; it is a bug of AWS.  Which type of
machine do you use?  If I guess well, you can check the file /etc/fstab.
There would be a double mount of /dev/xvdb.  If yes, you should:
1. stop HDFS
2. umount /dev/xvdb at /
3. restart HDFS
The commands are sketched below.  Hope this could be helpful.

Cheers,
Gen
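A command sketch of these steps (the HDFS script paths are assumptions for a
spark-ec2 cluster, not confirmed by the thread):

    # 1. Check whether /dev/xvdb is listed/mounted twice
    grep xvdb /etc/fstab
    mount | grep xvdb

    # 2. If it is also mounted at /, stop HDFS, remove the duplicate
    #    mount, and restart HDFS
    /root/ephemeral-hdfs/bin/stop-dfs.sh    # assumed path
    umount /dev/xvdb                        # drops the extra mount
    /root/ephemeral-hdfs/bin/start-dfs.sh   # assumed path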

On Sun, Feb 8, 2015 at 8:16 AM, ey-chih chow <eyc...@hotmail.com> wrote:

Hi,

I submitted a spark job to an ec2 cluster, using spark-submit.  At a worker
node, there is an exception of 'no space left on device' as follows.

==========================================
15/02/08 01:53:38 ERROR logging.FileAppender: Error writing stream to file
/root/spark/work/app-20150208014557-0003/0/stdout
java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:345)
        at org.apache.spark.util.logging.FileAppender.appendToFile(FileAppender.scala:92)
        at org.apache.spark.util.logging.FileAppender.appendStreamToFile(FileAppender.scala:72)
        at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply$mcV$sp(FileAppender.scala:39)
        at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39)
        at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1311)
        at org.apache.spark.util.logging.FileAppender$$anon$1.run(FileAppender.scala:38)
===========================================

The command df showed the following information at the worker node:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda1             8256920   8256456         0 100% /
tmpfs                  7752012         0   7752012   0% /dev/shm
/dev/xvdb             30963708   1729652  27661192   6% /mnt
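The root filesystem (/) is completely full while /mnt is nearly empty.  To
see what is filling it, something like the following can help (a diagnostic
sketch, not part of the original report):

    # Largest directory trees on the root filesystem; -x keeps du from
    # descending into /mnt and other separate mounts
    du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -20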

Does anybody know how to fix this?  Thanks.


Ey-Chih Chow



