Are you getting OutOfMemory on the driver or on the executor? A typical cause
of OOM in Spark is too few tasks for a job, so each task has to process too
much data.
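(A minimal sketch of that advice, not from the original reply; the input path
and the 200 are illustrative.)

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("more-tasks")
  .set("spark.default.parallelism", "200") // default task count for shuffle stages
val sc = new SparkContext(conf)

val data = sc.textFile("hdfs:///input") // hypothetical input
// More partitions mean smaller tasks, so each task needs less memory.
val repartitioned = data.repartition(200)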
--
…ce etc
Best regards
Kay-Uwe Moosheimer

> On 22.08.2017 at 20:16, shitijkuls wrote:
>
> Any help here will be appreciated.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-submit-OutOfMem
Any help here will be appreciated.
--
Thanks Zoltan.
So far I got to a full repro which works both in docker and on a bigger
real-world cluster. Also, the whole thing only happens in `cluster` mode.
I filed a ticket for it:
https://issues.apache.org/jira/browse/SPARK-10487
Any thoughts?
On Mon, Sep 7, 2015 at 7:59 PM, Zsolt Tóth wrote:
Hi,
I ran your example on Spark-1.4.1 and 1.5.0-rc3. It succeeds on 1.4.1 but
throws the OOM on 1.5.0. Do any of you know which PR introduced this
issue?
Zsolt
2015-09-07 16:33 GMT+02:00 Zoltán Zvara:
Hey, I'd try to debug, profile ResolvedDataSource. As far as I know, your
write will be performed by the JVM.
On Mon, Sep 7, 2015 at 4:11 PM Tóth Zoltán wrote:
Unfortunately I'm getting the same error.
The other interesting things are that:
- the parquet files actually got written to HDFS (also with .write.parquet())
- the application gets stuck in the RUNNING state for good even after the
error is thrown
15/09/07 10:01:10 INFO spark.ContextCleaner: C
Hi,
Can you try using the save method instead of write?
e.g. out_df.save("path", "parquet")
b0c1
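(For reference, a hedged sketch of the two APIs in play in Spark 1.3-1.5;
out_df and the paths are placeholders, sqlContext is the one predefined in the
shell, and the three calls are alternatives shown with distinct paths so the
sketch runs.)

val out_df = sqlContext.range(0, 10) // placeholder DataFrame

out_df.write.parquet("hdfs:///tmp/out1") // writer API, Spark 1.4+
out_df.write.format("parquet").save("hdfs:///tmp/out2") // equivalent generic form
out_df.save("hdfs:///tmp/out3", "parquet") // the 1.3-style save suggested above, deprecated in 1.4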
Aaand, the error! :)
Exception in thread "org.apache.hadoop.hdfs.PeerCache@4e000abf"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread
"org.apache.hadoop.hdfs.PeerCache@4e000abf"
Exception in thread "Thread-7"
Exception: java.lang.OutOfMemoryError thrown from
Hi,
When I execute the Spark ML Logistic Regression example in pyspark I run
into an OutOfMemory exception. I'm wondering if any of you have experienced
the same, or have a hint about how to fix it.
The interesting bit is that I only get the exception when I try to write
the result DataFrame into a file.
> at org.apache.spark.broadcast.TorrentBroadcast$.unBlockifyObject(TorrentBroadcast.scala:236)
> at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readObject$1.apply$mcV
> at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
> at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
> at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1888)
> at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
> 15/04/21 15:51:23 ERROR ExecutorUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-1,5,main]
> On Wed, Apr 22, 2015 at 1:32 AM, Olivier Girardot <sourav.chan...@livestream.com> wrote:
Hi,
We are building a Spark Streaming application which reads from Kafka, does
updateStateByKey based on the received message type, and finally stores into
Redis.
After running for a few seconds the executor process gets killed by throwing
an OutOfMemory error.
The code snippet is below:
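(Only the first lines of the snippet survive in the archive:
NoOfReceiverInstances = 1 and a map over (1 to NoOfReceiverInstances). Below
is a minimal sketch of the setup described above; the topic, batch interval,
state function, and Redis write are assumptions for illustration, not the
poster's code.)

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val sparkConf = new SparkConf().setAppName("KafkaToRedis")
val ssc = new StreamingContext(sparkConf, Seconds(5))
ssc.checkpoint("hdfs:///tmp/checkpoints") // required by updateStateByKey

val NoOfReceiverInstances = 1
val kafkaStreams = (1 to NoOfReceiverInstances).map(
  _ => KafkaUtils.createStream(ssc, "zkhost:2181", "consumer-group", Map("events" -> 1))
)
val messages = ssc.union(kafkaStreams)

// Running count per message type; state like this grows without bound
// unless old keys are dropped, which is a classic source of executor OOM.
val counts = messages.map { case (_, msgType) => (msgType, 1L) }
  .updateStateByKey((values: Seq[Long], state: Option[Long]) =>
    Some(state.getOrElse(0L) + values.sum))

counts.foreachRDD(_.foreachPartition { iter =>
  // open one Redis connection per partition and write the pairs (sketch only)
  iter.foreach { case (k, v) => /* redisClient.set(k, v.toString) */ }
})

ssc.start()
ssc.awaitTermination()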
…with incremental data in Avro,
3. doing timestamp-based duplicate removal (including partitioning in
reduceByKey), and
4. joining a couple of MySQL tables using JdbcRDD.

Of late, we are seeing major instabilities where the app crashes on a lost
executor, which itself failed due to an OutOfMemory error as below. This looks
almost identical to https://issues.apache.org/jira/browse/SPARK-4885, even
though we are seeing this error in Spark 1.1.

2015-01-15 20:12:51,653 [handle-message-exec
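(A hedged sketch of the step-3 dedup shape, assuming records carry an id and a
timestamp; Record and numPartitions are illustrative stand-ins, not the
poster's schema.)

import org.apache.spark.rdd.RDD

case class Record(id: String, ts: Long, payload: String)

def dedupLatest(records: RDD[Record], numPartitions: Int): RDD[Record] =
  records.map(r => (r.id, r))
    // keep the newest record per id; the partition count matters here because
    // all values for a key are brought together by the shuffle
    .reduceByKey((a, b) => if (a.ts >= b.ts) a else b, numPartitions)
    .values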
…/configuration.html
Thanks,
Jerry
From: MEETHU MATHEW [mailto:meethu2...@yahoo.co.in]
Sent: Wednesday, August 20, 2014 4:48 PM
To: Akhil Das; Ghousia
Cc: user@spark.apache.org
Subject: Re: OutOfMemory Error
Hi,
How to increase the heap size?
What is the difference between spark executor memory and heap size?
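(For what it's worth: in Spark 1.x, spark.executor.memory is the executor JVM
heap; it becomes the executor's -Xmx. A minimal sketch with illustrative
values; note spark.driver.memory only takes effect if set before the driver
JVM starts, e.g. on the spark-submit command line.)

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("bigger-heap")
  .set("spark.executor.memory", "4g") // executor heap size (illustrative)
val sc = new SparkContext(conf)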
But this would be applicable only to operations that have a shuffle phase. It
might not be applicable to a simple map operation where a record is mapped to
a new huge value, resulting in an OutOfMemory error.

On Mon, Aug 18, 2014 at 12:34 PM, Akhil Das wrote:

> I believe spark.shuffle.memoryFraction is the one you are looking for.
>
> spark.shuffle.memoryFraction: Fraction of Java heap to use for aggregation
> and cogroups during shuffles.
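(A hedged sketch of the setting under discussion, with Spark 1.x-era names and
illustrative values; as the reply above notes, it only governs shuffle-time
aggregation buffers, not the size of objects a map produces.)

import org.apache.spark.SparkConf

// Defaults in Spark 1.x were 0.2 for shuffle and 0.6 for storage.
val conf = new SparkConf()
  .set("spark.shuffle.memoryFraction", "0.4") // more heap for shuffle aggregation
  .set("spark.storage.memoryFraction", "0.5") // shrink the cache share to compensate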
Hi,
I am trying to implement machine learning algorithms on Spark. I am working
on a 3 node cluster, with each node having 5GB of memory. Whenever I am
working with a slightly higher number of records, I end up with an OutOfMemory
error. The problem is that even if the number of records is only slightly
higher, the intermediate result from a transformation is huge and this results
in the OutOfMemory error.