September 2015 12:27 AM
To: Jack Yang
Cc: Ted Yu; Andy Huang; user@spark.apache.org
Subject: Re: No space left on device when running graphx job
Would you mind sharing what your solution was? It would help those on the forum
who might run into the same problem. Even if it's a silly 'gotcha'.
Andy:
Can you show the complete stack trace?
Have you checked that there are enough free inodes on the .129 machine?
Cheers
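For anyone following along, a quick way to check both of the things Ted mentions on the affected worker, using standard Linux tools (the filesystem to watch is the one backing spark.local.dir, /tmp by default):

```shell
# Check free disk space per filesystem.
df -h

# "No space left on device" can also mean the filesystem has run out
# of inodes even though free space remains; check inode usage with:
df -i
```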
> On Sep 23, 2015, at 11:43 PM, Andy Huang wrote:
>
> Hi Jack,
>
> Are you writing out to disk? Or it sounds like Spark is spilling to disk (RAM
> filled up) and it's running out of disk space.
>
Hi folk,
I have an issue with GraphX. (Spark 1.4.0 + 4 machines + 4G memory + 4 CPU cores)
Basically, I load data using the GraphLoader.edgeListFile method and then count
the number of vertices using graph.vertices.count().
The problem is:
Lost task 11972.0 in stage 6.0 (TID 54585, 192.168.70.129):
Hi Jack,
Are you writing out to disk? Or it sounds like Spark is spilling to disk
(RAM filled up) and it's running out of disk space.
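If that is what's happening, one sketch of a fix (paths and job names below are examples, not from the thread): first see how much shuffle/spill data Spark has written under its local dir, then point spark.local.dir at a volume with more room. The directory must exist on every worker.

```shell
# Spark's spill/shuffle files live under spark.local.dir (/tmp by
# default), in directories named spark-* / blockmgr-*:
du -sh /tmp/spark-* 2>/dev/null

# Relaunch with spark.local.dir on a larger volume
# (/data/spark-tmp, MyGraphJob, and myjob.jar are hypothetical):
spark-submit \
  --conf spark.local.dir=/data/spark-tmp \
  --class MyGraphJob myjob.jar
```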
Cheers
Andy
On Thu, Sep 24, 2015 at 4:29 PM, Jack Yang wrote:
> Hi folk,
>
>
>
> I have an issue with GraphX. (Spark 1.4.0 + 4 machines + 4G memory + 4 CPU cores)
Hi all,
I resolved the problem.
Thanks, folks.
Jack
From: Jack Yang [mailto:j...@uow.edu.au]
Sent: Friday, 25 September 2015 9:57 AM
To: Ted Yu; Andy Huang
Cc: user@spark.apache.org
Subject: RE: No space left on device when running graphx job
Also, please see the screenshot below from the Spark web UI.