Hi
Spark provides the spark.local.dir configuration to specify the work directory on the
pod. You can set spark.local.dir to your mount path.
Best regards
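A minimal sketch of that advice (the mount path /data/spark-scratch is a
placeholder, not from this thread):

```shell
# Point Spark's scratch/shuffle space at the volume mounted on the pod.
# /data/spark-scratch is a placeholder; substitute your own mount path.
spark-submit \
  --conf spark.local.dir=/data/spark-scratch \
  ...
```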
Manoj GEORGE wrote on Thu, Sep 1, 2022 at 21:16:
> CONFIDENTIAL & RESTRICTED
>
> Hi Team,
>
> I am new to spark, so please excuse my ignorance.
Hi George,
You can try mounting a larger PersistentVolume to the work directory, as
described here, instead of using the local dir, which might have site-specific
size constraints:
https://spark.apache.org/docs/latest/running-on-kubernetes.html#using-kubernetes-volumes
-Matt
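For reference, a sketch of the PersistentVolumeClaim approach from that docs
page (the claim name "spark-data" and mount path /data are placeholders). On
recent Spark versions, a volume whose name begins with "spark-local-dir-" is
used as scratch space:

```shell
# Mount an existing PVC on each executor; because the volume name starts
# with "spark-local-dir-", Spark uses it for scratch/shuffle space instead
# of the default emptyDir. "spark-data" and /data are placeholders.
spark-submit \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.options.claimName=spark-data \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.mount.path=/data \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.mount.readOnly=false \
  ...
```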
CONFIDENTIAL & RESTRICTED
Hi Team,
I am new to spark, so please excuse my ignorance.
Currently we are trying to run PySpark on a Kubernetes cluster. The setup is
working fine for some jobs, but when we are processing a large file (36 GB),
we run into out-of-space issues.
Based on what was fou
Thanks Sean.
Kind Regards,
Sachit Murarka
It's there in the error: No space left on device
You ran out of disk space (local disk) on one of your machines.
On Mon, Mar 8, 2021 at 2:02 AM Sachit Murarka
wrote:
> Hi All,
>
> I am getting the following error in my spark job.
>
> Can someon
failed 4 times, most recent failure: Lost task 0.3 in stage
41.0 (TID 80817, executor 193): com.esotericsoftware.kryo.KryoException:
java.io.IOException: No space left on device
    at com.esotericsoftware.kryo.io.Output.flush(Output.java:188)
    at com.esotericsoftware.kryo.io.Output.require(Output.java:164)
    at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:251)
stand the root cause though. I'll be happy to
hear deeper insight as well.

On Mon, Aug 20, 2018 at 7:08 PM, Steve Lewis wrote:
We are trying to run a job that has previously run on Spark 1.3 on a
different cluster. The job was converted to 2.3 spark and this is a
new cluster.
The job dies after completing about a half dozen stages with
java.io.IOException: No space left on device
It appears that the nodes are
0.0 (TID 6, 172.29.62.145, executor 0): java.nio.file.FileSystemException:
/tmp/spark-523d5331-3884-440c-ac0d-f46838c2029f/executor-390c9cd7-217e-42f3-97cb-fa2734405585/spark-206d92c0-f0d3-443c-97b2-39494e2c5fdd/-4230744641534510169119_cache
-> ./PublishGainersandLosers-1.0-SNAPSHOT-shaded-Gopal.jar: No space left on
device
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
scratch directory, i.e. SPARK_LOCAL_DIRS, to be one
having 100 GB space.
The hierarchical dataset, whose size is (< 400 kB), remains constant
throughout the iterations.
I have tried the worker cleanup flag but it has no effect, i.e.
"spark.worker.cleanup.enabled=true"

Error:
Caused by: java.io.IOException: No space left on device
    at java.io.FileOutputStream.writeBytes(Native Method)
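A sketch of those two knobs for a standalone cluster (paths and intervals are
illustrative). Note that spark.worker.cleanup.* only removes the work
directories of *stopped* applications, which may explain why the flag appears
to have no effect while a job is still running:

```shell
# conf/spark-env.sh on each worker -- paths/values are illustrative.
export SPARK_LOCAL_DIRS=/data/spark-scratch   # scratch/shuffle directory
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true \
  -Dspark.worker.cleanup.interval=1800 \
  -Dspark.worker.cleanup.appDataTtl=86400"
```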
I have not tried rdd.unpersist(), I thought using rdd = null is the same,
is it not?
On Wed, Oct 18, 2017 at 1:07 AM, Imran Rajjad wrote:
> did you try calling rdd.unpersist()
did you try calling rdd.unpersist()
On Wed, Oct 18, 2017 at 10:04 AM, Mina Aslani wrote:
Process data in micro batch
On 18-Oct-2017 10:36 AM, "Chetan Khatri"
wrote:
Your hard drive doesn't have much space
On 18-Oct-2017 10:35 AM, "Mina Aslani" wrote:
Hi,
I get "No space left on device" error in my spark worker:
Error writing stream to file /usr/spark-2.2.0/work/app-.../0/stderr
java.io.IOException: No space left on device
In my spark cluster, I have one worker and one master.
My program consumes stream of data from kafka and
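Since the failing write here is the executor's stderr log under the worker's
work/ directory, one option is Spark's rolling-log settings for standalone
workers (a sketch; the size and retention values are illustrative):

```shell
# Cap executor stdout/stderr logs in the standalone worker's work/ dir.
spark-submit \
  --conf spark.executor.logs.rolling.strategy=size \
  --conf spark.executor.logs.rolling.maxSize=134217728 \
  --conf spark.executor.logs.rolling.maxRetainedFiles=3 \
  ...
```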
September 2015 12:27 AM
To: Jack Yang
Cc: Ted Yu; Andy Huang; user@spark.apache.org
Subject: Re: No space left on device when running graphx job
Would you mind sharing what your solution was? It would help those on the forum
who might run into the same problem. Even if it's a silly 'gotcha' it
Hi all,
I resolved the problems.
Thanks folks.
Jack
From: Jack Yang [mailto:j...@uow.edu.au]
Sent: Friday, 25 September 2015 9:57 AM
To: Ted Yu; Andy Huang
Cc: user@spark.apache.org
Subject: RE: No space left on device when running graphx job
Also, please see the screenshot below from spark web
Subject: RE: No space left on device when running graphx job
Hi, here is the full stack trace:
15/09/25 09:50:14 WARN scheduler.TaskSetManager: Lost task 21088.0 in stage 6.0
(TID 62230, 192.168.70.129): java.io.IOException: No space left on device
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write
+ 4G memory + 4 CPU cores)

Basically, I load data using the GraphLoader.edgeListFile method and then count
the number of nodes using the graph.vertices.count() method.

The problem is:

Lost task 11972.0 in stage 6.0 (TID 54585, 192.168.70.129):
java.io.IOException: No space left on device
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(FileOutputStream.java:345)

when I try a small amount of data, the code is working. So I guess the error
comes from the amount of data.
This is how
Has anyone seen this error? Not sure which dir the program was trying to
write to.
I am running Spark 1.4.1, submitting Spark job to Yarn, in yarn-client mode.
15/09/04 21:36:06 ERROR SparkContext: Error adding jar
(java.io.IOException: No space left on device), was the --addJars option
used?
Srinivasan <devathecool1...@gmail.com> wrote:

> Hi,
>
> I am trying to run an ETL on spark which involves an expensive shuffle
> operation. Basically I require a self-join to be performed on a
> sparkDataFrame RDD. The job runs fine for around 15 hours and then fails
> with "*java.io.IOException: No space left on device*". I initially thought
> this could be due to *spark.local.dir* pointing to the */tmp* directory,
> which was configured with *2GB* of space; since this job requires expensive
> shuffles, spark requires more space to write the shuffle files. Hence I
> configured *spark.local.dir* to point to a different directory which has
> *1TB* of space. But still I get the same *no space left exception*. What
> could be the root cause of this issue?
>
> Thanks in a
by SPARK_LOCAL_DIRS (Standalone, Mesos) or
LOCAL_DIRS (YARN) environment variables set by the cluster manager.

2015-05-06 20:35 GMT+08:00 Yifan LI <iamyifa...@gmail.com>:
Hi,
I am running my graphx application on Spark, but it failed since there is an
error on one executor node(on which available hdfs space is small) that “no
space left on device”.
I can understand why it happened, because my vertex(-attribute) rdd was
becoming bigger and bigger during
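Following the reply above, which notes that YARN supplies executor scratch
space via the LOCAL_DIRS environment variable, the directories to enlarge on
YARN are the NodeManager's, not spark.local.dir (paths are placeholders):

```shell
# yarn-site.xml equivalent (placeholder paths):
#   yarn.nodemanager.local-dirs = /data1/yarn/local,/data2/yarn/local
# Inside a running YARN container you can confirm what Spark will use:
echo "$LOCAL_DIRS"
```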
It could be filling up your /tmp directory. You need to set your
spark.local.dir or you can also specify SPARK_WORKER_DIR to another
location which has sufficient space.
Thanks
Best Regards
On Mon, May 4, 2015 at 7:27 PM, shahab wrote:
> Hi,
>
> I am getting "No space left on dev
See
https://wiki.gentoo.org/wiki/Knowledge_Base:No_space_left_on_device_while_there_is_plenty_of_space_available
What's the value for spark.local.dir property ?
Cheers
On Mon, May 4, 2015 at 6:57 AM, shahab wrote:
> Hi,
>
> I am getting "No space left on device"
Hi,
I am getting a "No space left on device" exception when doing repartitioning
of approx. 285 MB of data while there is still 2 GB space left ??
does it mean that repartitioning needs more space (more than 2 GB) for
repartitioning of 285 MB of data ??
best,
/Shahab
Sorry I put the log messages when creating the thread in
http://apache-spark-user-list.1001560.n3.nabble.com/java-io-IOException-No-space-left-on-device-td22702.html
but I forgot that raw messages will not be sent in emails.
So this is the log related to the error :
15/04/29 02:48:50 INFO
the training data is a file containing 156060 (size 8.1M).

The problem is that when trying to persist a partition into memory and there
is not enough memory, the partition is persisted on disk and despite having
229G of free disk space, I got "No space left on device"..

This is how I'm running the program:

./spark-submit --class com.custom.sentimentAnalysis.MainPipeline --master
local[2] --driver-memory 5g ml_pipeline.jar labeledTrainData.tsv testData.tsv

And this is a part of the log:

If you need more information, please let me know.
Thanks
1 (TID 12572, ip-10-81-151-40.ec2.internal):
java.io.FileNotFoundException:
/mnt/spark/spark-local-20140827191008-05ae/0c/shuffle_1_7570_5768 (No space left on
device)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
    at org.apache.spark.storage.DiskBlockOb
Date: Saturday, August 9, 2014 at 1:48 PM
To: "u...@spark.incubator.apache.org" <u...@spark.incubator.apache.org>,
kmatzen <kmat...@gmail.com>
Subject: Re: No space left on device
Your map-only job shou
k/logs
directory to someplace on /mnt instead. If it's /tmp, you can set
java.io.tmpdir to another directory in Spark's JVM options.
Matei
On August 8, 2014 at 11:02:48 PM, kmatzen (kmat...@gmail.com) wrote:
I need some configuration / debugging recommendations to work around &q
I need some configuration / debugging recommendations to work around "no
space left on device". I am completely new to Spark, but I have some
experience with Hadoop.
I have a task where I read images stored in sequence files from s3://,
process them with a map in scala, and write the r
> during the
> initialization of the application:
>
> 14/07/16 06:56:08 INFO storage.DiskBlockManager: Created local
> Thanks,
> Chris
>
> On Tue, Jul 15, 2014 at 11:44 PM, Chris Gore wrote:
>
>> Hi Chris,
>>
>> I've encountered this error when running Spark's ALS methods too. In my
> was a shuffle, it would spill many GB of data onto the local drive. What
> fixed it was setting it to use the /mnt directory, where a network
> drive is mounted. For example, setting an environmental variable:
>
> export SPACE=$(mount | grep mnt | awk '{print $3"/spark/"}' | xargs | sed 's/ /,/g')
-Dspark.local.dir=$SPACE or simply
-Dspark.local.dir=/mnt/spark/,/mnt2/spark/ when you run your driver
application

Chris

On Jul 15, 2014, at 11:39 PM, Xiangrui Meng wrote:
Check the number of inodes (df -i). The assembly build may create many
small files. -Xiangrui
On Tue, Jul 15, 2014 at 11:35 PM, Chris DuBois wrote:
> Hi all,
>
> I am encountering the following error:
>
> INFO scheduler.TaskSetManager: Loss was due to java.io.IOException: No
Hi all,
I am encountering the following error:
INFO scheduler.TaskSetManager: Loss was due to java.io.IOException: No
space left on device [duplicate 4]
For each slave, df -h looks roughly like this, which makes the above error
surprising.
Filesystem  Size  Used  Avail  Use%  Mounted
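The inode check suggested in this thread can be run on each slave; "No space
left on device" is also raised when a filesystem runs out of inodes even
though df -h still shows free space:

```shell
# Inspect inode usage; an IUse% near 100% means the filesystem is out of
# inodes regardless of free bytes. -P keeps the output on one line.
df -Pi /tmp
```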
tly add this in the spark-ec2 script.
Writing lots of tmp files in the 8GB `/` is not a great idea.
still write temp files to /tmp/hadoop-root ?
2014-05-06 18:05 GMT+02:00 Han JU :
I wonder why is your / is full. Try clearing out /tmp and also make sure in
the spark-env.sh you have put SPARK_JAVA_OPTS+="
-Dspark.local.dir=/mnt/spark"
Thanks
Best Regards
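The spark-env.sh change suggested above, sketched for the Spark 1.x-era
scripts discussed in this thread (the /mnt/spark path assumes an EC2
instance-store mount, as created by the spark-ec2 script):

```shell
# conf/spark-env.sh -- relocate scratch space off the small root volume.
SPARK_JAVA_OPTS+=" -Dspark.local.dir=/mnt/spark"
export SPARK_JAVA_OPTS
```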
On Tue, May 6, 2014 at 9:35 PM, Han JU wrote:
Hi,
I've a `no space left on device` exception when pulling some 22GB data from
s3 block storage to the ephemeral HDFS. The cluster is on EC2 using
spark-ec2 script with 4 m1.large.
The code is basically:
val in = sc.textFile("s3://...")
in.saveAsTextFile("hdfs://...")
thread).
What is interesting is that only two out of the 16 slaves had this
problem :)
Ognen
On 3/24/14, 12:57 AM, Patrick Wendell wrote:
Ognen - just so I understand. The issue is that there weren't enough
inodes and this was causing a "No space left on device" error? Is that
correct? If so, that's good to know because it's definitely counter
intuitive.
On Sun, Mar 23, 2014 at 8:36 PM, Ognen Duzlevski
I would love to work on this (and other) stuff if I can bother someone
with questions offline or on a dev mailing list.
Ognen
On 3/23/14, 10:04 PM, Aaron Davidson wrote:
Thanks for bringing this up, 100% inode utilization is an issue I
haven't seen raised before and this raises another issue wh
Thanks for bringing this up, 100% inode utilization is an issue I haven't
seen raised before and this raises another issue which is not on our
current roadmap for state cleanup (cleaning up data which was not fully
cleaned up from a crashed process).
On Sun, Mar 23, 2014 at 7:57 PM, Ognen Duzlevs
Bleh, strike that, one of my slaves was at 100% inode utilization on the
file system. It was /tmp/spark* leftovers that apparently did not get
cleaned up properly after failed or interrupted jobs.
Mental note - run a cron job on all slaves and master to clean up
/tmp/spark* regularly.
Thanks (
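The "mental note" above can be sketched like this (the 2-day age threshold is
arbitrary; adjust it to your job cadence):

```shell
# List leftover Spark scratch directories in /tmp older than 2 days.
# As a crontab entry (e.g. nightly at 03:00), append -exec rm -rf {} + :
#   0 3 * * * find /tmp -maxdepth 1 -name 'spark-*' -mtime +2 -exec rm -rf {} +
find /tmp -maxdepth 1 -name 'spark-*' -mtime +2
```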
Aaron, thanks for replying. I am very much puzzled as to what is going
on. A job that used to run on the same cluster is failing with this
mysterious message about not having enough disk space when in fact I can
see through "watch df -h" that the free space is always hovering around
3+GB on the
By default, with P partitions (for both the pre-shuffle stage and
post-shuffle), there are P^2 files created.
With spark.shuffle.consolidateFiles turned on, we would instead create only
P files. Disk space consumption is largely unaffected, however, by the
number of partitions unless each partition
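On the old Spark versions discussed in this part of the thread, the
consolidation flag mentioned above was set like this (the property was later
removed along with the hash-based shuffle):

```shell
# Create on the order of P shuffle files instead of P^2 (old Spark only).
spark-submit --conf spark.shuffle.consolidateFiles=true ...
```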
On 3/23/14, 5:49 PM, Matei Zaharia wrote:
You can set spark.local.dir to put this data somewhere other than /tmp
if /tmp is full. Actually it's recommended to have multiple local
disks and set it to a comma-separated list of directories, one per disk.
Matei, does the number of tasks/partitions i
On 3/23/14, 5:35 PM, Aaron Davidson wrote:
On some systems, /tmp/ is an in-memory tmpfs file system, with its own
size limit. It's possible that this limit has been exceeded. You might
try running the "df" command to check to free space of "/tmp" or root
if tmp isn't listed.
3 GB also seems
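A quick check for the tmpfs case described above, run on the affected machine:

```shell
# Show the filesystem type and free space backing /tmp; a small tmpfs here
# can produce "No space left on device" while the root disk is nearly empty.
df -hT /tmp
```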
tion: Job aborted: Task
167.0:3 failed 4 times (most recent failure: Exception failure:
java.io.FileNotFoundException:
/tmp/spark-local-20140323214638-72df/31/shuffle_31_3_127 (No space left on
device))
org.apache.spark.SparkException: Job aborted: Task 167.0:3 failed 4 times
(most recent failure: Exception failure