Setting `hadoop.tmp.dir` in `spark-env.sh` solved the problem. The Spark job
no longer writes tmp files to /tmp/hadoop-root/.

  SPARK_JAVA_OPTS+=" -Dspark.local.dir=/mnt/spark,/mnt2/spark -Dhadoop.tmp.dir=/mnt/ephemeral-hdfs"
  export SPARK_JAVA_OPTS

I'm wondering if we should add this permanently to the spark-ec2 script.
Writing lots of tmp files to the 8 GB `/` is not a great idea.
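
For reference, the same directories can also be set per application instead
of cluster-wide in spark-env.sh. A minimal sketch with the programmatic API
(Spark 0.9-era; the app name is hypothetical, the paths are the spark-ec2
mount points from this thread):

  import org.apache.spark.{SparkConf, SparkContext}

  // Keep shuffle/spill scratch space off the small 8 GB root volume by
  // pointing spark.local.dir at the large ephemeral disks.
  val conf = new SparkConf()
    .setAppName("s3-to-hdfs-copy")   // hypothetical name, not from this thread
    .set("spark.local.dir", "/mnt/spark,/mnt2/spark")
  val sc = new SparkContext(conf)

  // Hadoop-side temp dir: where the s3:// filesystem buffers data locally.
  sc.hadoopConfiguration.set("hadoop.tmp.dir", "/mnt/ephemeral-hdfs")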


2014-05-06 18:59 GMT+02:00 Akhil Das <ak...@sigmoidanalytics.com>:

> I wonder why your / is full. Try clearing out /tmp, and also make sure
> that in spark-env.sh you have put SPARK_JAVA_OPTS+="
> -Dspark.local.dir=/mnt/spark"
>
> Thanks
> Best Regards
>
>
> On Tue, May 6, 2014 at 9:35 PM, Han JU <ju.han.fe...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm getting a `no space left on device` exception when pulling some 22 GB
>> of data from S3 block storage to the ephemeral HDFS. The cluster is on EC2,
>> launched with the spark-ec2 script, with 4 m1.large instances.
>>
>> The code is basically:
>>   val in = sc.textFile("s3://...")
>>   in.saveAsTextFile("hdfs://...")
>>
>> Spark creates 750 input partitions based on the input splits. When it
>> begins throwing this exception, there's no space left on the root file
>> system of some worker machines:
>>
>> Filesystem           1K-blocks      Used Available Use% Mounted on
>> /dev/xvda1             8256952   8256952         0 100% /
>> tmpfs                  3816808         0   3816808   0% /dev/shm
>> /dev/xvdb            433455904  29840684 381596916   8% /mnt
>> /dev/xvdf            433455904  29437000 382000600   8% /mnt2
>>
>> Before the job begins, only 35% of / is used.
>>
>> Filesystem           1K-blocks      Used Available Use% Mounted on
>> /dev/xvda1             8256952   2832256   5340840  35% /
>> tmpfs                  3816808         0   3816808   0% /dev/shm
>> /dev/xvdb            433455904  29857768 381579832   8% /mnt
>> /dev/xvdf            433455904  29470104 381967496   8% /mnt2
>>
>>
>> Any suggestions on this problem? Does Spark cache/store some data
>> before writing to HDFS?
>>
>>
>> Full stacktrace:
>> ---------------------
>> java.io.IOException: No space left on device
>>   at java.io.FileOutputStream.writeBytes(Native Method)
>>   at java.io.FileOutputStream.write(FileOutputStream.java:345)
>>   at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>>   at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveBlock(Jets3tFileSystemStore.java:210)
>>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>   at java.lang.reflect.Method.invoke(Method.java:606)
>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>   at com.sun.proxy.$Proxy8.retrieveBlock(Unknown Source)
>>   at org.apache.hadoop.fs.s3.S3InputStream.blockSeekTo(S3InputStream.java:160)
>>   at org.apache.hadoop.fs.s3.S3InputStream.read(S3InputStream.java:119)
>>   at java.io.DataInputStream.read(DataInputStream.java:100)
>>   at org.apache.hadoop.util.LineReader.readLine(LineReader.java:134)
>>   at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:92)
>>   at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:51)
>>   at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:156)
>>   at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:149)
>>   at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:64)
>>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
>>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
>>   at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
>>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
>>   at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
>>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
>>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
>>   at org.apache.spark.scheduler.Task.run(Task.scala:53)
>>   at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
>>   at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
>>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
>>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>   at java.lang.Thread.run(Thread.java:744)
>>
>>
>
>
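
For the record, the stack trace above also shows where the space goes:
`Jets3tFileSystemStore.retrieveBlock` is Hadoop's old s3:// block filesystem
staging each block in a local temp file before serving the read. If I read the
Hadoop defaults correctly, that buffer directory, `fs.s3.buffer.dir`, defaults
to `${hadoop.tmp.dir}/s3`, i.e. `/tmp/hadoop-root/s3` when running as root,
which matches what we saw. A minimal sketch of overriding it per job rather
than in `spark-env.sh`, assuming an existing SparkContext `sc`:

  // Sketch only: stage the S3 block buffers on the large ephemeral volume.
  // fs.s3.buffer.dir defaults to ${hadoop.tmp.dir}/s3 in core-default.xml.
  sc.hadoopConfiguration.set("hadoop.tmp.dir", "/mnt/ephemeral-hdfs")
  sc.hadoopConfiguration.set("fs.s3.buffer.dir", "/mnt/ephemeral-hdfs/s3")

  val in = sc.textFile("s3://...")    // paths elided, as in the original message
  in.saveAsTextFile("hdfs://...")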


-- 
*JU Han*

Data Engineer @ Botify.com

+33 0619608888
