Re: java.lang.OutOfMemoryError Spark Worker

2020-05-12 Thread Hrishikesh Mishra
o >> /grid/1/spark/work/driver-20200508153502-1291/stdout closed: Stream closed >> >> 20/05/08 15:36:55 INFO ExternalShuffleBlockResolver: Application >> app-20200508153654-11776 removed, cleanupLocalDirs = true >> >> 20/05/08 *15:36:55* INFO Worker: Drive

Re: java.lang.OutOfMemoryError Spark Worker

2020-05-08 Thread Russell Spitzer
:36:55* INFO Worker: Driver* driver-20200508153502-1291 was > killed by user* > > *20/05/08 15:43:06 WARN AbstractChannelHandlerContext: An exception > 'java.lang.OutOfMemoryError: Java heap space' [enable DEBUG level for full > stacktrace] was thrown by a user handler's exceptionCaught()

Re: java.lang.OutOfMemoryError Spark Worker

2020-05-08 Thread Hrishikesh Mishra
: An exception 'java.lang.OutOfMemoryError: Java heap space' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:* *java.lang.OutOfMemoryError: Java heap space* *20/05/08 15:43:23 ERROR SparkUncaughtExceptionHandler

Re: java.lang.OutOfMemoryError Spark Worker

2020-05-08 Thread Jacek Laskowski
Worker: Driver driver-20200508153502-1291 was > killed by user > > *20/05/08 15:43:06 WARN AbstractChannelHandlerContext: An exception > 'java.lang.OutOfMemoryError: Java heap space' [enable DEBUG level for full > stacktrace] was thrown by a user handler's exceptionCaught() method while >

Re: java.lang.OutOfMemoryError Spark Worker

2020-05-08 Thread Hrishikesh Mishra
driver-20200508153502-1291 was killed by user *20/05/08 15:43:06 WARN AbstractChannelHandlerContext: An exception 'java.lang.OutOfMemoryError: Java heap space' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception

Re: java.lang.OutOfMemoryError Spark Worker

2020-05-08 Thread Jacek Laskowski
ions: Set(root); groups > with view permissions: Set(); users with modify permissions: Set(root); > groups with modify permissions: Set() > > 20/05/06 12:53:03 ERROR SparkUncaughtExceptionHandler: Uncaught exception > in thread Thread[ExecutorRunner for app-202005061247

Re: java.lang.OutOfMemoryError Spark Worker

2020-05-08 Thread Hrishikesh Mishra
= true 20/05/08 15:36:55 INFO Worker: Driver driver-20200508153502-1291 was killed by user *20/05/08 15:43:06 WARN AbstractChannelHandlerContext: An exception 'java.lang.OutOfMemoryError: Java heap space' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method

Re: java.lang.OutOfMemoryError Spark Worker

2020-05-07 Thread Hrishikesh Mishra
-9846/3,5,main]* *java.lang.OutOfMemoryError: Java heap space* * at org.apache.xerces.xni.XMLString.toString(Unknown Source)* at org.apache.xerces.parsers.AbstractDOMParser.characters(Unknown Source) at org.apache.xerces.xinclude.XIncludeHandler.characters(Unknown Source

Re: java.lang.OutOfMemoryError Spark Worker

2020-05-07 Thread Jeff Evans
t; in thread Thread[ExecutorRunner for app-20200506124717-10226/0,5,main] > > java.lang.OutOfMemoryError: Java heap space > > at org.apache.xerces.util.XMLStringBuffer.append(Unknown Source) > > at org.apache.xerces.impl.XMLEntityScanner.sc

java.lang.OutOfMemoryError Spark Worker

2020-05-07 Thread Hrishikesh Mishra
] java.lang.OutOfMemoryError: Java heap space at org.apache.xerces.util.XMLStringBuffer.append(Unknown Source) at org.apache.xerces.impl.XMLEntityScanner.scanData(Unknown Source) at org.apache.xerces.impl.XMLScanner.scanComment(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanComment

java.lang.OutOfMemoryError: Java heap space - Spark driver.

2018-08-29 Thread Guillermo Ortiz Fernández
n in finally: Java heap space java.lang.OutOfMemoryError: Java heap space at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_162] at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_162] at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$4.apply(TorrentBroadca

java.lang.OutOfMemoryError

2017-05-19 Thread Kürşat Kurt
Hi; I am trying multiclass text classification with a Random Forest classifier on my local computer (16 GB RAM, 4 physical cores). When I run with the parameters below, I am getting a "java.lang.OutOfMemoryError: GC overhead limit exceeded" error. spark-submit --driver-memory 1G --dri

Re: Spark Mlib - java.lang.OutOfMemoryError: Java heap space

2017-04-24 Thread Selvam Raman
10:09:26 INFO BlockManagerInfo: Removed taskresult_362 on ip-...-45.dev:40963 in memory (size: 5.2 MB, free: 8.9 GB) 17/04/24 10:09:26 INFO TaskSetManager: Finished task 125.0 in stage 1.0 (TID 359) in 4383 ms on ip-...-45.dev (125/234) # # java.lang.OutOfMemoryError: Java heap space

Spark Mlib - java.lang.OutOfMemoryError: Java heap space

2017-04-24 Thread Selvam Raman
Hi, I have 1 master and 4 slave nodes. Input data size is 14GB. Slave node config: 32GB RAM, 16 cores. I am trying to train a word embedding model using Spark. It is going out of memory. To train 14GB of data, how much memory do I require? I have given 20GB per executor, but below shows it is using

Re: getting error on spark streaming : java.lang.OutOfMemoryError: unable to create new native thread

2016-11-22 Thread Shixiong(Ryan) Zhu
ngested is only 506kb. > > > *16/11/23 03:05:54 INFO MappedDStream: Slicing from 1479850537180 ms to > 1479850537235 ms (aligned to 1479850537180 ms and 1479850537235 ms)* > > *Exception in thread "streaming-job-executor-0" > java.lang.OutOfMemoryError: unable to create n

getting error on spark streaming : java.lang.OutOfMemoryError: unable to create new native thread

2016-11-22 Thread Mohit Durgapal
in thread "streaming-job-executor-0" java.lang.OutOfMemoryError: unable to create new native thread* I looked it up and found out that it could be related to ulimit, I even increased the ulimit to 1 but still the same error. Regards Mohit

Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-11-01 Thread kant kodali
Here is a UI of my thread dump. http://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMTYvMTEvMS8tLWpzdGFja19kdW1wX3dpbmRvd19pbnRlcnZhbF8xbWluX2JhdGNoX2ludGVydmFsXzFzLnR4dC0tNi0xNy00Ng== On Mon, Oct 31, 2016 at 10:32 PM, kant kodali wrote: > Hi Vadim, > > Thank you so

Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-31 Thread kant kodali
Hi Vadim, Thank you so much, this was a very useful command. This conversation is going on here https://www.mail-archive.com/user@spark.apache.org/msg58656.html or you can just google "why is the spark driver program creating so many threads? How can I limit this number?"

Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-31 Thread Vadim Semenov
Have you tried to get the number of threads in a running process using `cat /proc/<pid>/status`? On Sun, Oct 30, 2016 at 11:04 PM, kant kodali wrote: > yes I did run ps -ef | grep "app_name" and it is root. > > > > On Sun, Oct 30, 2016 at 8:00 PM, Chan Chor Pang
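Vadim's `/proc` check can be scripted directly; a minimal sketch, using this shell's own PID as a stand-in for the Spark driver's PID:

```shell
# Compare a process's live thread count to the per-user process limit (ulimit -u).
# $$ (this shell's PID) stands in for the JVM PID you found via ps.
pid=$$
threads=$(awk '/^Threads:/ {print $2}' "/proc/$pid/status")
limit=$(ulimit -u)
echo "pid=$pid threads=$threads nproc_limit=$limit"
```

When `threads` creeps toward `limit`, "unable to create new native thread" is the expected failure mode.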

Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread kant kodali
yes I did run ps -ef | grep "app_name" and it is root. On Sun, Oct 30, 2016 at 8:00 PM, Chan Chor Pang wrote: > sorry, the UID > > On 10/31/16 11:59 AM, Chan Chor Pang wrote: > > actually if the max user processes is not the problem, i have no idea > > but i still

Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread Chan Chor Pang
sorry, the UID On 10/31/16 11:59 AM, Chan Chor Pang wrote: actually if the max user processes is not the problem, I have no idea. But I still suspect the user, as the user who runs spark-submit does not necessarily own the JVM process. Can you make sure, when you "ps -ef | grep {your app

Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread Chan Chor Pang
actually if the max user processes is not the problem, I have no idea. But I still suspect the user, as the user who runs spark-submit does not necessarily own the JVM process. Can you make sure, when you "ps -ef | grep {your app id}", that the PID is owned by root? On 10/31/16 11:21 AM, kant kodali
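The ownership question can be answered without guessing; a small sketch (again using this shell's PID as a placeholder for the Spark app's JVM PID):

```shell
# The owner of /proc/<pid> is the user whose ulimits apply to that process.
pid=$$
owner=$(stat -c %U "/proc/$pid")
echo "pid=$pid is owned by $owner"
```

Run `ulimit -u` as that user, not as whoever happened to type spark-submit.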

Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread kant kodali
The Java process is run by root and it has the same config: sudo -i ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i)

Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread Chan Chor Pang
I had the same exception before, and the problem was fixed after I changed the nproc conf. > max user processes (-u) 120242 ↑ this config does look good. Are you sure the user who ran ulimit -a is the same user who runs the Java process? Depending on how you submit the job and your setting,

Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread kant kodali
When I did this: cat /proc/sys/kernel/pid_max, I got 32768. On Sun, Oct 30, 2016 at 6:36 PM, kant kodali wrote: > I believe for ubuntu it is unlimited but I am not 100% sure (I just read > somewhere online). I ran ulimit -a and this is what I get > > core file size
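Besides pid_max, the kernel also caps the total number of threads; both ceilings can be read the same way (standard Linux procfs paths):

```shell
# Kernel-wide ceilings that can also trigger "unable to create new native thread"
pid_max=$(cat /proc/sys/kernel/pid_max)
threads_max=$(cat /proc/sys/kernel/threads-max)
echo "pid_max=$pid_max threads_max=$threads_max"
```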

Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread kant kodali
I believe for Ubuntu it is unlimited, but I am not 100% sure (I just read somewhere online). I ran ulimit -a and this is what I get: core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f)

Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread Chan Chor Pang
Not sure for Ubuntu, but I think you can just create the file yourself; the syntax will be the same as /etc/security/limits.conf. nproc.conf does not only limit the Java process but all processes owned by the same user, so even if the JVM process does nothing, if the corresponding user is busy in other ways the
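A sketch of what such a file could look like (written to a temp path here purely for illustration; the real location would be something like /etc/security/limits.d/90-nproc.conf, and the user name and values are placeholders):

```shell
# limits.conf syntax: <domain> <type> <item> <value>
conf=$(mktemp)
cat > "$conf" <<'EOF'
sparkuser  soft  nproc  65536
sparkuser  hard  nproc  65536
EOF
cat "$conf"
```

A re-login (or restart of the Spark daemons) is needed before the new limits take effect.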

Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread kant kodali
On Sun, Oct 30, 2016 at 5:22 PM, Chan Chor Pang wrote: > /etc/security/limits.d/90-nproc.conf > Hi, I am using Ubuntu 16.04 LTS. I have this directory /etc/security/limits.d/ but I don't have any files underneath it. This error happens after running for 4 to 5 hours. I

Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread Chan Chor Pang
you may want to check the process limit of the user who is responsible for starting the JVM. /etc/security/limits.d/90-nproc.conf On 10/29/16 4:47 AM, kant kodali wrote: "dag-scheduler-event-loop" java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thr

Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-29 Thread kant kodali
ler-event-loop" java.lang.OutOfMemoryError: unable to create > new native thread > at java.lang.Thread.start0(Native Method) > at java.lang.Thread.start(Thread.java:714) > at scala.concurrent.forkjoin.ForkJoinPool.tryAddW

java.lang.OutOfMemoryError: unable to create new native thread

2016-10-28 Thread kant kodali
"dag-scheduler-event-loop" java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:714) at scala.concurrent.forkjoin.ForkJoinPool.tryAddWorker( ForkJoinPool

Spark Sql - "broadcast-exchange-1" java.lang.OutOfMemoryError: Java heap space

2016-10-25 Thread Selvam Raman
Hi, I need help to figure out and solve a heap space problem. I have a query which involves 15+ tables, and when I try to print out the result (just 23 rows) it throws a heap space error. I tried the following command in standalone mode: (My Mac machine has 8 cores and 15GB RAM)

java.lang.OutOfMemoryError Spark MLlib ALS matrix factorization

2016-09-01 Thread ANDREA SPINA
, (stage 12.0 is the first iteration stage). After 4 retries, the job indeed fails and gets aborted: 16/08/31 23:53:03 WARN TaskSetManager: Lost task 12.0 in stage 2.0 (TID 3312, cloud-15): java.lang.OutOfMemoryError: GC overhead limit exceeded at java.lang.Integer.valueOf(Integer.java:832

Re: java.lang.OutOfMemoryError: GC overhead limit exceeded when using UDFs in SparkSQL (Spark 2.0.0)

2016-08-09 Thread Zoltan Fedor
; >> > > >> > There is a big table (5.6 Billion rows, 450Gb in memory) loaded into > 300 > >> > executors's memory in SparkSQL, on which we would do some calculation > >> > using > >> > UDFs in pyspark. > >> > If I run my SQL on on

Re: java.lang.OutOfMemoryError: GC overhead limit exceeded when using UDFs in SparkSQL (Spark 2.0.0)

2016-08-09 Thread Davies Liu
n >> > using >> > UDFs in pyspark. >> > If I run my SQL on only a portion of the data (filtering by one of the >> > attributes), let's say 800 million records, then all works well. But >> > when I >> > run the same SQL on all the data, then I receive &g

Re: java.lang.OutOfMemoryError: GC overhead limit exceeded when using UDFs in SparkSQL (Spark 2.0.0)

2016-08-09 Thread Zoltan Fedor
the data (filtering by one of the > > attributes), let's say 800 million records, then all works well. But > when I > > run the same SQL on all the data, then I receive > > "java.lang.OutOfMemoryError: GC overhead limit exceeded" from basically > all > > o

Re: java.lang.OutOfMemoryError: GC overhead limit exceeded when using UDFs in SparkSQL (Spark 2.0.0)

2016-08-08 Thread Davies Liu
cutors's memory in SparkSQL, on which we would do some calculation using > UDFs in pyspark. > If I run my SQL on only a portion of the data (filtering by one of the > attributes), let's say 800 million records, then all works well. But when I > run the same SQL on all the data, then I re

java.lang.OutOfMemoryError: GC overhead limit exceeded when using UDFs in SparkSQL (Spark 2.0.0)

2016-08-08 Thread Zoltan Fedor
a portion of the data (filtering by one of the attributes), let's say 800 million records, then all works well. But when I run the same SQL on all the data, then I receive "*java.lang.OutOfMemoryError: GC overhead limit exceeded"* from basically all of the executors. It seems to me that py

Re: Spark jobs failing due to java.lang.OutOfMemoryError: PermGen space

2016-08-04 Thread Deepak Sharma
on: Main class [org.apache.oozie.action.hadoop.SparkMain], >>> main() threw exception, PermGen space >>> 2016-08-03 22:33:43,319 WARN SparkActionExecutor:523 - >>> SERVER[ip-10-0-0-161.ec2.internal] USER[hadoop] GROUP[-] TOKEN[] >>> APP[ApprouteOozie] JOB[031-160

Re: Spark jobs failing due to java.lang.OutOfMemoryError: PermGen space

2016-08-04 Thread $iddhe$h Divekar
6-08-03 22:33:43,319 WARN SparkActionExecutor:523 - >> SERVER[ip-10-0-0-161.ec2.internal] USER[hadoop] GROUP[-] TOKEN[] >> APP[ApprouteOozie] JOB[031-160803180548580-oozie-oozi-W] >> ACTION[031-160803180548580-oozie-oozi-W@spark-approu

Re: Spark jobs failing due to java.lang.OutOfMemoryError: PermGen space

2016-08-04 Thread Deepak Sharma
60803180548580-oozie-oozi-W] > ACTION[031-160803180548580-oozie-oozi-W@spark-approute] Launcher > exception: PermGen space > java.lang.OutOfMemoryError: PermGen space > > oozie-oozi-W@spark-approute] Launcher exception: PermGen space > java.lang.OutOfMemoryError: PermGe

Spark jobs failing due to java.lang.OutOfMemoryError: PermGen space

2016-08-04 Thread $iddhe$h Divekar
[] APP[ApprouteOozie] JOB[031-160803180548580-oozie-oozi-W] ACTION[031-160803180548580-oozie-oozi-W@spark-approute] Launcher exception: PermGen space java.lang.OutOfMemoryError: PermGen space oozie-oozi-W@spark-approute] Launcher exception: PermGen space java.lang.OutOfMemory

Re: Exception in thread "dispatcher-event-loop-1" java.lang.OutOfMemoryError: Java heap space

2016-07-22 Thread Andy Davidson
ead "dispatcher-event-loop-1" java.lang.OutOfMemoryError: Java heap space > How much heap memory do you give the driver ? > > On Fri, Jul 22, 2016 at 2:17 PM, Andy Davidson <a...@santacruzintegration.com> > wrote: >> Given I get a stack trace in my python notebook I am

Re: Exception in thread "dispatcher-event-loop-1" java.lang.OutOfMemoryError: Java heap space

2016-07-22 Thread Ted Yu
TaskSetManager: Stage 146 contains a task of very > large size (145 KB). The maximum recommended task size is 100 KB. > > 16/07/22 18:39:47 WARN HeartbeatReceiver: Removing executor 2 with no > recent heartbeats: 153037 ms exceeds timeout 12 ms > > Excepti

Exception in thread "dispatcher-event-loop-1" java.lang.OutOfMemoryError: Java heap space

2016-07-22 Thread Andy Davidson
skSetManager: Stage 146 contains a task of very large size (145 KB). The maximum recommended task size is 100 KB. 16/07/22 18:39:47 WARN HeartbeatReceiver: Removing executor 2 with no recent heartbeats: 153037 ms exceeds timeout 12 ms Exception in thread "dispatcher-event-loop-1"

Re: java.lang.OutOfMemoryError related to Graphframe bfs

2016-07-15 Thread RK Aduri
Did you try with a different driver memory? Increasing the driver's memory can be one option. Can you print the GC and post the GC times? -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/java-lang-OutOfMemoryError-related-to-Graphframe-bfs-tp27318p27347.html

Re: Memory issue java.lang.OutOfMemoryError: Java heap space

2016-07-13 Thread Chanh Le
GB Ubuntu server... >>>>> >>>>> I have changed things in the conf file, but it looks like Spark does not >>>>> care, so I wonder if my issues are with the driver or executor. >>>>> >>>>> I set: >>>

Re: Memory issue java.lang.OutOfMemoryError: Java heap space

2016-07-13 Thread Jean Georges Perrin
ues are with the driver or executor. >>>> >>>> I set: >>>> >>>> spark.driver.memory 20g >>>> spark.executor.memory 20g >>>> And, whatever I do, the crash is always at the same spot in the app, which >>>

Re: Memory issue java.lang.OutOfMemoryError: Java heap space

2016-07-13 Thread Chanh Le
: >>> >>> spark.driver.memory 20g >>> spark.executor.memory 20g >>> And, whatever I do, the crash is always at the same spot in the app, which >>> makes me think that it is a driver pro

Re: Memory issue java.lang.OutOfMemoryError: Java heap space

2016-07-13 Thread Jean Georges Perrin
ame spot in the app, which >> makes me think that it is a driver problem. >> >> The exception I get is: >> >> 16/07/13 20:36:30 WARN TaskSetManager: Lost task 0.0 in stage 7.0 (TID 208, >> micha.nc.rr.com): java.lang.OutOfMemoryError: Java heap space >> at java.nio.H

Re: Memory issue java.lang.OutOfMemoryError: Java heap space

2016-07-13 Thread Jean Georges Perrin
gt; > The exception I get is: > > 16/07/13 20:36:30 WARN TaskSetManager: Lost task 0.0 in stage 7.0 (TID 208, > micha.nc.rr.com): java.lang.OutOfMemoryError: Java heap space > at java.nio.HeapCharBuffer.<init>(HeapCharBuffer.java:57) > at java.n

Memory issue java.lang.OutOfMemoryError: Java heap space

2016-07-13 Thread Jean Georges Perrin
): java.lang.OutOfMemoryError: Java heap space at java.nio.HeapCharBuffer.<init>(HeapCharBuffer.java:57) at java.nio.CharBuffer.allocate(CharBuffer.java:335) at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:810) at org.apache.hadoop.io.Text.decode(Text.java:412

SparkDriver throwing java.lang.OutOfMemoryError: Java heap space

2016-04-04 Thread Nirav Patel
ActorSystem [sparkDriver] java.lang.OutOfMemoryError: Java heap space at com.google.protobuf.AbstractMessageLite.toByteArray(AbstractMessageLite.java:62) at akka.remote.transport.AkkaPduProtobufCodec$.constructMessage(AkkaPduCodec.scala:138) at akka.remote.EndpointWriter.writeSend(Endpoint.scala:740

java.lang.OutOfMemoryError: Direct buffer memory when using broadcast join

2016-03-21 Thread Dai, Kevin
Hi all, I'm joining a small table (about 200m) with a huge table using a broadcast join; however, Spark throws the exception as follows: 16/03/20 22:32:06 WARN TransportChannelHandler: Exception in connection from java.lang.OutOfMemoryError: Direct buffer memory

Re: java.lang.OutOfMemoryError: Requested array size exceeds VM limit

2016-03-14 Thread nir
For uniform partitioning, you can try a custom Partitioner. -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/java-lang-OutOfMemoryError-Requested-array-size-exceeds-VM-limit-tp16809p26477.html Sent from the Apache Spark User List mailing list archive at

Re: [Spark 1.5]: Exception in thread "broadcast-hash-join-2" java.lang.OutOfMemoryError: Java heap space -- Work in 1.4, but 1.5 doesn't

2015-12-15 Thread Deenar Toraskar
t; >> >> *From:* Shuai Zheng [mailto:szheng.c...@gmail.com] >> *Sent:* Wednesday, November 04, 2015 3:22 PM >> *To:* user@spark.apache.org >> *Subject:* [Spark 1.5]: Exception in thread "broadcast-hash-join-2" >> java.lang.OutOfMemoryError: Java heap

Re: [Spark 1.5]: Exception in thread "broadcast-hash-join-2" java.lang.OutOfMemoryError: Java heap space -- Work in 1.4, but 1.5 doesn't

2015-12-15 Thread Deenar Toraskar
s proven >>> that there is no issue on the logic and data, it is caused by the new >>> version of Spark. >>> >>> >>> >>> So I want to know any new setup I should set in Spark 1.5 to make it >>> work? >>> >>> >>> >>> R

Re: df.partitionBy().parquet() java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-12-02 Thread Cheng Lian
he partitioned dataset successfully. I can see the output in HDFS once all Spark tasks are done. After the spark tasks are done, the job appears to be running for over an hour, until I get the following (full stack trace below): java.lang.OutOfMemoryError: GC o

Re: df.partitionBy().parquet() java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-12-02 Thread Adrien Mogenet
I can see the output in HDFS once all Spark tasks >> are done. >> >> After the spark tasks are done, the job appears to be running for over an >> hour, until I get the following (full stack trace below): >> >> java.lang.OutOfMemoryError: GC overhead limit exc

df.partitionBy().parquet() java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-11-28 Thread Don Drake
an hour, until I get the following (full stack trace below): java.lang.OutOfMemoryError: GC overhead limit exceeded at org.apache.parquet.format.converter.ParquetMetadataConverter.toParquetStatistics(ParquetMetadataConverter.java:238) I had set the driver memory to be 20GB. I attempted

newbie simple app, small data set: Py4JJavaError java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-11-18 Thread Andy Davidson
'An error occurred while calling {0}{1}{2}.\n'. --> 300 format(target_id, '.', name), value) 301 else: 302 raise Py4JError( Py4JJavaError: An error occurred while calling o65.partitions. : java.lang.OutOfMemoryError: GC overhead limit exceed

RE: [sparkR] Any insight on java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-11-07 Thread Sun, Rui
val Patel [mailto:dhaval1...@gmail.com] Sent: Saturday, November 7, 2015 12:26 AM To: Spark User Group Subject: [sparkR] Any insight on java.lang.OutOfMemoryError: GC overhead limit exceeded I have been struggling through this error since past 3 days and have tried all possible ways/suggest

[sparkR] Any insight on java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-11-06 Thread Dhaval Patel
cast_2_piece0 on localhost:39562 in memory (size: 2.4 KB, free: 530.0 MB) 15/11/06 10:45:20 INFO ContextCleaner: Cleaned accumulator 2 15/11/06 10:45:53 WARN ServletHandler: Error for /static/timeline-view.css java.lang.OutOfMemoryError: GC overhead limit exceeded at java.util.zip.Zip

RE: [Spark 1.5]: Exception in thread "broadcast-hash-join-2" java.lang.OutOfMemoryError: Java heap space -- Work in 1.4, but 1.5 doesn't

2015-11-04 Thread Shuai Zheng
oin-2" java.lang.OutOfMemoryError: Java heap space Hi all, I have a program which runs a somewhat complex business operation (a join) in Spark, and I get the exception below. I am running on Spark 1.5, with parameters: spark-submit --deploy-mode client --executor-cores=24 --driver-memory=2G

[Spark 1.5]: Exception in thread "broadcast-hash-join-2" java.lang.OutOfMemoryError: Java heap space

2015-11-04 Thread Shuai Zheng
").set("spark.sql.autoBroadcastJoinThreshold", "104857600"); This is running on an AWS c3.8xlarge instance. I am not sure what kind of parameter I should set if I have the OutOfMemoryError exception below. # # java.lang.OutOfMemoryError: Java heap space # -XX:OnOutOfMemoryError="kill -9
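Not from this thread, but a standard way to gather evidence on the next crash is to add the stock HotSpot heap-dump flags (the dump path below is only an example):

```shell
# HotSpot flags that write a heap dump whenever an OutOfMemoryError is thrown;
# pass them via spark.driver.extraJavaOptions / spark.executor.extraJavaOptions.
opts='-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/spark-oom.hprof'
echo "$opts"
# e.g. spark-submit --conf "spark.executor.extraJavaOptions=$opts" ...
```

The resulting .hprof file can then be opened in a heap analyzer to see which objects filled the heap.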

java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-10-04 Thread t_ras
I get java.lang.OutOfMemoryError: GC overhead limit exceeded when trying a count action on a file. The file is a CSV file, 217GB in size. I'm using 10 r3.8xlarge (Ubuntu) machines, CDH 5.3.6 and Spark 1.2.0. Configuration: spark.app.id:local-1443956477103 spark.app.name:Spark shell spark.cores.max

Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-10-04 Thread Ted Yu
1.2.0 is quite old. You may want to try 1.5.1 which was released in the past week. Cheers > On Oct 4, 2015, at 4:26 AM, t_ras <marti...@netvision.net.il> wrote: > > I get java.lang.OutOfMemoryError: GC overhead limit exceeded when trying > coutn action on a file. > >

Re: Spark 1.5.0 java.lang.OutOfMemoryError: PermGen space

2015-09-12 Thread Jagat Singh
Sorry, to answer your question fully: the job starts tasks and a few of them fail while some are successful. The failed ones have that PermGen error in their logs. But ultimately the full job is marked failed and the session quits. On Sun, Sep 13, 2015 at 10:48 AM, Jagat Singh wrote: > Hi

Re: Spark 1.5.0 java.lang.OutOfMemoryError: PermGen space

2015-09-12 Thread Jagat Singh
Hi Davies, This was first query on new version. The one which ran successfully was Spark Pi example ./bin/spark-submit --class org.apache.spark.examples.SparkPi \ --master yarn-client \ --num-executors 3 \ --driver-memory 4g \ --executor-memory 2g \ --executor-cores 1 \

Re: Spark 1.5.0 java.lang.OutOfMemoryError: PermGen space

2015-09-11 Thread Davies Liu
Did this happen immediately after you started the cluster, or after running some queries? Is this in local mode or cluster mode? On Fri, Sep 11, 2015 at 3:00 AM, Jagat Singh wrote: > Hi, > > We have queries which were running fine on 1.4.1 system. > > We are testing upgrade and

Spark 1.5.0 java.lang.OutOfMemoryError: PermGen space

2015-09-11 Thread Jagat Singh
Hi, We have queries which were running fine on the 1.4.1 system. We are testing the upgrade, and even a simple query like val t1 = sqlContext.sql("select count(*) from table") t1.show works perfectly fine on 1.4.1 but throws an OOM error in 1.5.0. Are there any changes in default memory settings from

Re: Spark 1.5.0 java.lang.OutOfMemoryError: PermGen space

2015-09-11 Thread Ted Yu
Have you seen this thread ? http://search-hadoop.com/m/q3RTtPPuSvBu0rj2 > On Sep 11, 2015, at 3:00 AM, Jagat Singh wrote: > > Hi, > > We have queries which were running fine on 1.4.1 system. > > We are testing upgrade and even simple query like > val t1=

Re: Spark GraphX memory requirements + java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-08-11 Thread rene.pfitzner
: Saturday, 11 July 2015 03:58 To: Ted Yu; Robin East; user Subject: Re: Spark GraphX memory requirements + java.lang.OutOfMemoryError: GC overhead limit exceeded Hello again. So I could compute triangle numbers when running the code from the Spark shell without workers (with the --driver-memory 15g option

Re: Strange Error: java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-07-15 Thread Saeed Shahrivari
: 15/07/15 18:24:05 WARN scheduler.TaskSetManager: Lost task 267.0 in stage 0.0 (TID 267, psh-11.nse.ir): java.lang.OutOfMemoryError: GC overhead limit exceeded It seems that the map function keeps the hashDocs RDD in memory, and when the memory is filled in an executor, the application

Re: Strange Error: java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-07-15 Thread Ted Yu
the html that has the shortest URL. However, after running for 2-3 hours the application crashes due to memory issue. Here is the exception: 15/07/15 18:24:05 WARN scheduler.TaskSetManager: Lost task 267.0 in stage 0.0 (TID 267, psh-11.nse.ir): java.lang.OutOfMemoryError: GC overhead limit

Strange Error: java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-07-15 Thread Saeed Shahrivari
): java.lang.OutOfMemoryError: GC overhead limit exceeded It seems that the map function keeps the hashDocs RDD in memory, and when the memory is filled in an executor, the application crashes. Persisting the map output to disk solves the problem. Adding the following line between map and reduce solves

Re: Spark GraphX memory requirements + java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-07-10 Thread Roman Sokolov
: (0 + 8) / 32]15/07/11 01:48:45 WARN TaskSetManager: Lost task 2.0 in stage 7.0 (TID 130, 192.168.0.28): io.netty.handler.codec.DecoderException: java.lang.OutOfMemoryError at io.netty.handler.codec.ByteToMessageDecoder.channelRead

Re: java.lang.OutOfMemoryError: PermGen space

2015-07-07 Thread jitender
Stati, change SPARK_REPL_OPTS to SPARK_SUBMIT_OPTS and try again. I faced the same issue and making this change worked for me. I looked at the spark-shell file under the bin dir and found SPARK_SUBMIT_OPTS being used: SPARK_SUBMIT_OPTS=-XX:MaxPermSize=256m bin/spark-shell --master
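jitender's fix as a runnable sketch (PermGen, and hence -XX:MaxPermSize, only exists on Java 7 and earlier; the master URL is a placeholder):

```shell
# Spark 1.4's bin/spark-shell reads SPARK_SUBMIT_OPTS, not SPARK_REPL_OPTS
export SPARK_SUBMIT_OPTS="-XX:MaxPermSize=256m"
echo "$SPARK_SUBMIT_OPTS"
# then launch: bin/spark-shell --master local[*]
```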

Re: Spark GraphX memory requirements + java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-06-26 Thread Roman Sokolov
Ok, but what does it mean? I did not change the core files of Spark, so is it a bug there? PS: on small datasets (500 MB) I have no problem. On 25.06.2015 at 18:02, Ted Yu yuzhih...@gmail.com wrote: The assertion failure from TriangleCount.scala corresponds with the following lines:

Re: Spark GraphX memory requirements + java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-06-26 Thread Robin East
You’ll get this issue if you just take the first 2000 lines of that file. The problem is triangleCount() expects srcId < dstId, which is not the case in the file (e.g. vertex 28). You can get round this by calling graph.convertToCanonicalEdges(), which removes bi-directional edges and ensures

Re: Spark GraphX memory requirements + java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-06-26 Thread Roman Sokolov
Yep, I already found it. So I added 1 line: val graph = GraphLoader.edgeListFile(sc, , ...) val newgraph = graph.convertToCanonicalEdges() and could successfully count triangles on newgraph. Next I will test it on bigger (several GB) networks. I am using Spark 1.3 and 1.4 but haven't seen

Re: java.lang.OutOfMemoryError: PermGen space

2015-06-25 Thread Roberto Coluccio
=256m as spark-shell input argument? Roberto On Wed, Jun 24, 2015 at 5:57 PM, stati srikanth...@gmail.com wrote: Hello, I moved from 1.3.1 to 1.4.0 and started receiving java.lang.OutOfMemoryError: PermGen space when I use spark-shell. Same Scala code works fine in 1.3.1 spark-shell

Spark GraphX memory requirements + java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-06-25 Thread Roman Sokolov
Hello! I am trying to compute the number of triangles with GraphX, but I get a memory or heap-size error even though the dataset is very small (1GB). I run the code in spark-shell on a 16GB RAM machine (I also tried with 2 workers on separate machines, 8GB RAM each). So I have 15x more memory than the

Re: Spark GraphX memory requirements + java.lang.OutOfMemoryError: GC overhead limit exceeded

2015-06-25 Thread Ted Yu
The assertion failure from TriangleCount.scala corresponds with the following lines: g.outerJoinVertices(counters) { (vid, _, optCounter: Option[Int]) => val dblCount = optCounter.getOrElse(0) // double count should be even (divisible by two) assert((dblCount & 1) == 0)

Re: java.lang.OutOfMemoryError: PermGen space

2015-06-24 Thread Srikanth
to pass it with --driver-java-options -XX:MaxPermSize=256m as spark-shell input argument? Roberto On Wed, Jun 24, 2015 at 5:57 PM, stati srikanth...@gmail.com wrote: Hello, I moved from 1.3.1 to 1.4.0 and started receiving java.lang.OutOfMemoryError: PermGen space when I use spark

java.lang.OutOfMemoryError: PermGen space

2015-06-24 Thread stati
Hello, I moved from 1.3.1 to 1.4.0 and started receiving java.lang.OutOfMemoryError: PermGen space when I use spark-shell. The same Scala code works fine in the 1.3.1 spark-shell. I was loading the same set of external JARs and have the same imports as in 1.3.1. I tried increasing the perm size to 256m. I still got

Re: java.lang.OutOfMemoryError: PermGen space

2015-06-24 Thread Roberto Coluccio
Did you try to pass it with --driver-java-options -XX:MaxPermSize=256m as spark-shell input argument? Roberto On Wed, Jun 24, 2015 at 5:57 PM, stati srikanth...@gmail.com wrote: Hello, I moved from 1.3.1 to 1.4.0 and started receiving java.lang.OutOfMemoryError: PermGen space when I
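A commonly suggested workaround for the PermGen error on the 1.4.0 spark-shell (relevant on Java 7 and earlier, which still have a permanent generation) is to raise MaxPermSize explicitly at launch. A sketch of both forms; the 256m value is the one suggested in this thread:

```shell
# Pass the JVM flag directly to the driver when launching spark-shell:
./bin/spark-shell --driver-java-options "-XX:MaxPermSize=256m"

# Equivalent via a Spark configuration property:
./bin/spark-shell --conf "spark.driver.extraJavaOptions=-XX:MaxPermSize=256m"
```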

Re: spark-sql from CLI ---EXCEPTION: java.lang.OutOfMemoryError: Java heap space

2015-06-17 Thread Sanjay Subramanian
@spark.apache.org Sent: Thursday, June 11, 2015 8:43 AM Subject: spark-sql from CLI ---EXCEPTION: java.lang.OutOfMemoryError: Java heap space hey guys Using Hive and Impala daily intensively. Want to transition to spark-sql in CLI mode Currently in my sandbox I am using the Spark (standalone mode

Re: spark-sql from CLI ---EXCEPTION: java.lang.OutOfMemoryError: Java heap space

2015-06-16 Thread Sanjay Subramanian
best regards sanjay From: Josh Rosen rosenvi...@gmail.com To: Sanjay Subramanian sanjaysubraman...@yahoo.com Cc: user@spark.apache.org user@spark.apache.org Sent: Friday, June 12, 2015 7:15 AM Subject: Re: spark-sql from CLI ---EXCEPTION: java.lang.OutOfMemoryError: Java heap

Re: Spark job throwing “java.lang.OutOfMemoryError: GC overhead limit exceeded”

2015-06-15 Thread Deng Ching-Mallete
java.lang.OutOfMemoryError: GC overhead limit exceeded. The job is trying to process a filesize 4.5G. I've tried following spark configuration: --num-executors 6 --executor-memory 6G --executor-cores 6 --driver-memory 3G I tried increasing more cores and executors which sometimes works

Spark job throwing “java.lang.OutOfMemoryError: GC overhead limit exceeded”

2015-06-15 Thread diplomatic Guru
Hello All, I have a Spark job that throws java.lang.OutOfMemoryError: GC overhead limit exceeded. The job is trying to process a filesize 4.5G. I've tried following spark configuration: --num-executors 6 --executor-memory 6G --executor-cores 6 --driver-memory 3G I tried increasing more
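For "GC overhead limit exceeded", the usual advice in replies like the one above is to give each task more headroom (fewer cores per executor) and raise parallelism, rather than only adding executors. A hedged sketch built from the flags quoted in this thread; the cores and parallelism values are illustrative, not tuned, and `your-job.jar` is a placeholder:

```shell
# Fewer concurrent tasks per executor => more heap per task;
# more partitions => smaller per-task working set.
spark-submit \
  --num-executors 6 \
  --executor-memory 6G \
  --executor-cores 2 \
  --driver-memory 3G \
  --conf spark.default.parallelism=200 \
  your-job.jar
```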

Re: spark-sql from CLI ---EXCEPTION: java.lang.OutOfMemoryError: Java heap space

2015-06-13 Thread Sanjay Subramanian
---EXCEPTION: java.lang.OutOfMemoryError: Java heap space It sounds like this might be caused by a memory configuration problem.  In addition to looking at the executor memory, I'd also bump up the driver memory, since it appears that your shell is running out of memory when collecting a large query
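Since the spark-sql CLI runs the query driver in-process, the driver-memory bump suggested above applies directly to it. A minimal sketch (the 4G figure and the query are placeholders, not from the thread):

```shell
# The CLI collects results into the driver JVM, so raise the driver heap:
spark-sql --driver-memory 4G -e "SELECT ..."
```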

Re: spark-sql from CLI ---EXCEPTION: java.lang.OutOfMemoryError: Java heap space

2015-06-13 Thread Josh Rosen
-- *From:* Josh Rosen rosenvi...@gmail.com *To:* Sanjay Subramanian sanjaysubraman...@yahoo.com *Cc:* user@spark.apache.org user@spark.apache.org *Sent:* Friday, June 12, 2015 7:15 AM *Subject:* Re: spark-sql from CLI ---EXCEPTION: java.lang.OutOfMemoryError: Java

Re: spark-sql from CLI ---EXCEPTION: java.lang.OutOfMemoryError: Java heap space

2015-06-12 Thread Josh Rosen
while handling an exception event ([id: 0x01b99855, /10.0.0.19:58117 => /10.0.0.19:52016] EXCEPTION: java.lang.OutOfMemoryError: Java heap space) java.lang.OutOfMemoryError: Java heap space at org.jboss.netty.buffer.HeapChannelBuffer.<init>(HeapChannelBuffer.java:42

Re: spark-sql from CLI ---EXCEPTION: java.lang.OutOfMemoryError: Java heap space

2015-06-12 Thread Josh Rosen
: 0x01b99855, /10.0.0.19:58117 => /10.0.0.19:52016] EXCEPTION: java.lang.OutOfMemoryError: Java heap space) java.lang.OutOfMemoryError: Java heap space at org.jboss.netty.buffer.HeapChannelBuffer.<init>(HeapChannelBuffer.java:42) at org.jboss.netty.buffer.BigEndianHeapChannelBuffer.<init>

spark-sql from CLI ---EXCEPTION: java.lang.OutOfMemoryError: Java heap space

2015-06-11 Thread Sanjay Subramanian
handler while handling an exception event ([id: 0x01b99855, /10.0.0.19:58117 => /10.0.0.19:52016] EXCEPTION: java.lang.OutOfMemoryError: Java heap space) java.lang.OutOfMemoryError: Java heap space at org.jboss.netty.buffer.HeapChannelBuffer.<init>(HeapChannelBuffer.java:42

MLLib SVMWithSGD : java.lang.OutOfMemoryError: Java heap space

2015-04-16 Thread sarath
Hi, I'm trying to train an SVM on KDD2010 dataset (available from libsvm). But I'm getting java.lang.OutOfMemoryError: Java heap space error. The dataset is really sparse and has around 8 million data points and 20 million features. I'm using a cluster of 8 nodes (each with 8 cores and 64G RAM

Re: MLLib SVMWithSGD : java.lang.OutOfMemoryError: Java heap space

2015-04-16 Thread Akhil Das
Try increasing your driver memory. Thanks Best Regards On Thu, Apr 16, 2015 at 6:09 PM, sarath sarathkrishn...@gmail.com wrote: Hi, I'm trying to train an SVM on KDD2010 dataset (available from libsvm). But I'm getting java.lang.OutOfMemoryError: Java heap space error. The dataset

Re: java.lang.OutOfMemoryError: unable to create new native thread

2015-03-25 Thread ๏̯͡๏
am seeing various crashes in spark on large jobs which all share a similar exception: java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:714) I increased nproc (i.e. ulimit -u) 10 fold, but it doesn't

Re: java.lang.OutOfMemoryError: unable to create new native thread

2015-03-25 Thread Matt Silvey
share a similar exception: java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:714) I increased nproc (i.e. ulimit -u) 10 fold, but it doesn't help. Does anyone know how to avoid those kinds of errors
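"Unable to create new native thread" is typically a process/thread-limit or per-thread stack-memory problem rather than a heap problem, which is why raising `ulimit -u` alone may not help, as the poster above found. A sketch of the things worth checking; the 512k stack size is an illustrative value, not a recommendation from this thread:

```shell
# Threads count against the max-user-processes limit on Linux:
ulimit -u
# A virtual-memory cap also bounds how many thread stacks fit:
ulimit -v

# Shrinking the JVM thread stack lets more threads fit in the same memory:
spark-submit --conf "spark.executor.extraJavaOptions=-Xss512k" your-job.jar
```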
