Yes, we have just modified the configuration, and everything works fine.
Thanks very much for the help.
On Thu, Mar 19, 2015 at 5:24 PM, Ted Yu wrote:
For YARN, possibly this one?
yarn.nodemanager.local-dirs
/hadoop/yarn/local
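(That name/value pair looks like a yarn-site.xml property; a minimal sketch of the entry, assuming the stock Hadoop XML config format and keeping the path from this message:)

<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/hadoop/yarn/local</value>
</property>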
Cheers
On Thu, Mar 19, 2015 at 2:21 PM, Marcelo Vanzin wrote:
IIRC you have to set that configuration on the Worker processes (for
standalone). The app can't override it (only for a client-mode
driver). YARN has a similar configuration, but I don't know the name
(shouldn't be hard to find, though).
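(A minimal sketch of the standalone case Marcelo describes, assuming the stock conf/spark-env.sh mechanism; the paths are illustrative:)

# conf/spark-env.sh on every Worker machine; restart the Workers to pick it up.
# Comma-separated list; put each entry on a large volume.
export SPARK_LOCAL_DIRS=/data1/spark-tmp,/data2/spark-tmp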
On Thu, Mar 19, 2015 at 11:56 AM, Davies Liu wrote:
Is it possible that `spark.local.dir` is overridden by others? The docs say:
NOTE: In Spark 1.0 and later this will be overridden by
SPARK_LOCAL_DIRS (Standalone, Mesos) or LOCAL_DIRS (YARN)
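(One way to see which directories the executors actually ended up with; a sketch assuming a live PySpark session with an existing SparkContext `sc`, where `effective_local_dirs` is just an illustrative helper name:)

import os
import socket

def effective_local_dirs(_):
    # Per the docs quoted above, the cluster manager's variable wins over
    # spark.local.dir: SPARK_LOCAL_DIRS (Standalone/Mesos), LOCAL_DIRS (YARN).
    return (socket.gethostname(),
            os.environ.get("SPARK_LOCAL_DIRS") or os.environ.get("LOCAL_DIRS"))

print(sc.parallelize(range(1000)).map(effective_local_dirs).distinct().collect())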
On Sat, Mar 14, 2015 at 5:29 PM, Peng Xia wrote:
And I have 2 TB of free space on the C drive.
On Sat, Mar 14, 2015 at 8:29 PM, Peng Xia wrote:
Hi Sean,
Thanks very much for your reply.
I tried to configure it with the code below:
sf = SparkConf().setAppName("test").set("spark.executor.memory",
"45g").set("spark.cores.max", 62).set("spark.local.dir", "C:\\tmp")
But I still get the error.
Do you know how I can configure this?
Thanks,
Best,
Peng
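(For reference, the snippet above completed into a runnable form, with the import it needs and the SparkContext creation added; a sketch assuming PySpark:)

from pyspark import SparkConf, SparkContext

sf = (SparkConf()
      .setAppName("test")
      .set("spark.executor.memory", "45g")
      .set("spark.cores.max", "62")
      .set("spark.local.dir", "C:\\tmp"))
sc = SparkContext(conf=sf)

(As the replies above note, spark.local.dir set from the app is overridden by the cluster manager's SPARK_LOCAL_DIRS / LOCAL_DIRS, which is why this attempt did not help.)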
On Sat, Mar 14, 2015, Sean Owen wrote:
It means pretty much what it says. You ran out of space on an executor
(not the driver), because the directory used for serialization temp files
is full (not all volumes). Set spark.local.dir to something more
appropriate and larger.
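(A sketch of one way to apply that on this thread's Windows standalone cluster, assuming the Worker launch scripts pick up conf\spark-env.cmd; the path is illustrative:)

:: conf\spark-env.cmd on each Worker machine
set SPARK_LOCAL_DIRS=D:\spark-tmp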
On Sat, Mar 14, 2015 at 2:10 AM, Peng Xia wrote:
Hi
I was running a logistic regression algorithm on an 8-node Spark cluster;
each node has 8 cores and 56 GB of RAM (every node runs Windows), and the
drive holding the Spark installation has 1.9 TB of capacity. The dataset
I was training on has around 40 million records with around 6,600
features.
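(A sketch of the kind of MLlib call this describes, assuming Spark 1.x PySpark and an existing RDD of LabeledPoint rows named `points`; the names are illustrative:)

from pyspark.mllib.classification import LogisticRegressionWithLBFGS

# points: RDD[LabeledPoint], ~40 million rows, ~6,600 features each.
model = LogisticRegressionWithLBFGS.train(points, iterations=100)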