>> /grid/1/spark/work/driver-20200508153502-1291/stdout closed: Stream closed
>>
>> 20/05/08 15:36:55 INFO ExternalShuffleBlockResolver: Application
>> app-20200508153654-11776 removed, cleanupLocalDirs = true
>>
>> 20/05/08 15:36:55 INFO Worker: Driver driver-20200508153502-1291 was
>> killed by user
>
> 20/05/08 15:43:06 WARN AbstractChannelHandlerContext: An exception
> 'java.lang.OutOfMemoryError: Java heap space' [enable DEBUG level for full
> stacktrace] was thrown by a user handler's exceptionCaught() method while
> handling the following exception:
> java.lang.OutOfMemoryError: Java heap space
> 20/05/08 15:43:23 ERROR SparkUncaughtExceptionHandler
ions: Set(root); groups
with view permissions: Set(); users with modify permissions: Set(root);
groups with modify permissions: Set()

20/05/06 12:53:03 ERROR SparkUncaughtExceptionHandler: Uncaught exception
in thread Thread[ExecutorRunner for app-20200506124717-10226/0,5,main]
java.lang.OutOfMemoryError: Java heap space
at org.apache.xerces.util.XMLStringBuffer.append(Unknown Source)
at org.apache.xerces.impl.XMLEntityScanner.scanData(Unknown Source)
at org.apache.xerces.impl.XMLScanner.scanComment(Unknown Source)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanComment

Uncaught exception in thread Thread[ExecutorRunner for app-...-9846/3,5,main]
java.lang.OutOfMemoryError: Java heap space
at org.apache.xerces.xni.XMLString.toString(Unknown Source)
at org.apache.xerces.parsers.AbstractDOMParser.characters(Unknown Source)
at org.apache.xerces.xinclude.XIncludeHandler.characters(Unknown Source)
n in finally: Java heap space
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_162]
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_162]
at
org.apache.spark.broadcast.TorrentBroadcast$$anonfun$4.apply(TorrentBroadca
Hi,
I am trying multiclass text classification with a RandomForest classifier on
my local computer (16 GB RAM, 4 physical cores).
When I run with the parameters below, I am getting a
"java.lang.OutOfMemoryError: GC overhead limit exceeded" error.
spark-submit --driver-memory 1G --dri
10:09:26 INFO BlockManagerInfo: Removed taskresult_362 on
ip-...-45.dev:40963 in memory (size: 5.2 MB, free: 8.9 GB)
17/04/24 10:09:26 INFO TaskSetManager: Finished task 125.0 in stage 1.0
(TID 359) in 4383 ms on ip-...-45.dev (125/234)
#
# java.lang.OutOfMemoryError: Java heap space
Hi,
I have 1 master and 4 slave nodes. The input data size is 14GB.
Slave node config: 32GB RAM, 16 cores.
I am trying to train a word embedding model using Spark. It is going out of
memory. To train 14GB of data, how much memory do I require?
I have given 20GB per executor but below shows it is using
ingested is only 506kb.
>
>
> 16/11/23 03:05:54 INFO MappedDStream: Slicing from 1479850537180 ms to
> 1479850537235 ms (aligned to 1479850537180 ms and 1479850537235 ms)
>
> Exception in thread "streaming-job-executor-0"
> java.lang.OutOfMemoryError: unable to create new native thread
I looked it up and found out that it could be related to ulimit, I even
increased the ulimit to 1 but still the same error.
Regards
Mohit
Here is a UI of my thread dump.
http://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMTYvMTEvMS8tLWpzdGFja19kdW1wX3dpbmRvd19pbnRlcnZhbF8xbWluX2JhdGNoX2ludGVydmFsXzFzLnR4dC0tNi0xNy00Ng==
On Mon, Oct 31, 2016 at 10:32 PM, kant kodali wrote:
Hi Vadim,
Thank you so much this was a very useful command. This conversation is
going on here
https://www.mail-archive.com/user@spark.apache.org/msg58656.html
or you can just google "why spark driver program is creating so many
threads? How can I limit this number?"
Have you tried to get the number of threads in a running process using `cat
/proc/<pid>/status`?
On Sun, Oct 30, 2016 at 11:04 PM, kant kodali wrote:
yes I did run ps -ef | grep "app_name" and it is root.
On Sun, Oct 30, 2016 at 8:00 PM, Chan Chor Pang
wrote:
sorry, the UID
On 10/31/16 11:59 AM, Chan Chor Pang wrote:
actually, if the max user processes is not the problem, I have no idea,
but I still suspect the user,
as the user who runs spark-submit is not necessarily the owner of the JVM
process
can you make sure when you run "ps -ef | grep {your app id}" that the
process owner is root?
On 10/31/16 11:21 AM, kant kodali
The Java process is run by root and it has the same config
sudo -i
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i)
I had the same exception before, and the problem was fixed after I changed
the nproc conf.
> max user processes (-u) 120242
↑ this config does look good.
Are you sure the user who ran ulimit -a is the same user who runs the Java
process?
It depends on how you submit the job and your settings.
when I did this
cat /proc/sys/kernel/pid_max
I got 32768
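Pulling the checks from this thread together: the sketch below inspects the three limits that bound native-thread creation on Linux. Here `$$` (the current shell) stands in for the Spark JVM's pid, which you would normally find via `ps -ef | grep {your app id}` as suggested above.

```shell
# Check the limits that bound native thread creation on Linux.
# $$ (this shell) is only a stand-in for the Spark driver/executor JVM pid.
cat /proc/sys/kernel/pid_max       # system-wide ceiling on pids/tids
ulimit -u                          # max user processes for the current user
grep '^Threads:' "/proc/$$/status" # live thread count of one process
```

If the `Threads:` count approaches either limit, "unable to create new native thread" is expected.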
On Sun, Oct 30, 2016 at 6:36 PM, kant kodali wrote:
I believe for ubuntu it is unlimited but I am not 100% sure (I just read
somewhere online). I ran ulimit -a and this is what I get
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f)
Not sure for Ubuntu, but I think you can just create the file yourself;
the syntax will be the same as /etc/security/limits.conf.
nproc.conf limits not only the Java process but all processes by the same
user, so even if the JVM process does nothing, the limit can be hit when the
corresponding user is busy in other ways.
On Sun, Oct 30, 2016 at 5:22 PM, Chan Chor Pang
wrote:
> /etc/security/limits.d/90-nproc.conf
>
Hi,
I am using Ubuntu 16.04 LTS. I have this directory /etc/security/limits.d/
but I don't have any files underneath it. This error happens after running
for 4 to 5 hours. I
you may want to check the process limit of the user who is responsible for
starting the JVM.
/etc/security/limits.d/90-nproc.conf
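For reference, entries in such a file follow the limits.conf syntax; a hypothetical 90-nproc.conf raising the per-user process/thread cap might look like this (the username and values are placeholders, not taken from this thread):

```
# /etc/security/limits.d/90-nproc.conf -- same syntax as /etc/security/limits.conf
# <domain>   <type>   <item>   <value>
sparkuser    soft     nproc    32768
sparkuser    hard     nproc    65536
```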
On 10/29/16 4:47 AM, kant kodali wrote:
"dag-scheduler-event-loop" java.lang.OutOfMemoryError: unable to create
new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at scala.concurrent.forkjoin.ForkJoinPool.tryAddWorker(ForkJoinPool
Hi,
I need help to figure out and solve a heap space problem.
I have a query which joins 15+ tables, and when I try to print out the
result (just 23 rows) it throws a heap space error.
I tried the following command in standalone mode
(my Mac has 8 cores and 15GB RAM):
(stage 12.0 is the first iteration stage). After 4 retries, the job indeed
fails and gets aborted.
16/08/31 23:53:03 WARN TaskSetManager: Lost task 12.0 in stage 2.0 (TID
3312, cloud-15): java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.lang.Integer.valueOf(Integer.java:832
> There is a big table (5.6 billion rows, 450GB in memory) loaded into 300
> executors' memory in SparkSQL, on which we would do some calculation
> using UDFs in pyspark.
> If I run my SQL on only a portion of the data (filtering by one of the
> attributes), let's say 800 million records, then all works well. But when I
> run the same SQL on all the data, then I receive
> "java.lang.OutOfMemoryError: GC overhead limit exceeded" from basically
> all of the executors.
It seems to me that py
on: Main class [org.apache.oozie.action.hadoop.SparkMain],
main() threw exception, PermGen space
2016-08-03 22:33:43,319 WARN SparkActionExecutor:523 -
SERVER[ip-10-0-0-161.ec2.internal] USER[hadoop] GROUP[-] TOKEN[]
APP[ApprouteOozie] JOB[031-160803180548580-oozie-oozi-W]
ACTION[031-160803180548580-oozie-oozi-W@spark-approute] Launcher
exception: PermGen space
java.lang.OutOfMemoryError: PermGen space
Exception in thread "dispatcher-event-loop-1"
java.lang.OutOfMemoryError: Java heap space
> How much heap memory do you give the driver ?
>
> On Fri, Jul 22, 2016 at 2:17 PM, Andy Davidson <a...@santacruzintegration.com>
> wrote:
>> Given I get a stack trace in my python notebook I am
TaskSetManager: Stage 146 contains a task of very
large size (145 KB). The maximum recommended task size is 100 KB.

16/07/22 18:39:47 WARN HeartbeatReceiver: Removing executor 2 with no recent
heartbeats: 153037 ms exceeds timeout 12 ms

Exception in thread "dispatcher-event-loop-1"
Did you try a different driver memory? Increasing the driver's memory can be
one option. Can you enable GC logging and post the GC times?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/java-lang-OutOfMemoryError-related-to-Graphframe-bfs-tp27318p27347.html
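The GC-logging request above could be satisfied with something like the following sketch (JDK 7/8-era flags from that period; the application jar is a placeholder, not from the thread):

```shell
# Hypothetical sketch: turn on GC logging for the driver and executors
# so the GC times can be captured and posted.
spark-submit \
  --driver-java-options "-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
  --conf "spark.executor.extraJavaOptions=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
  your-app.jar
```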
GB Ubuntu server...

>> I have changed things in the conf file, but it looks like Spark does not
>> care, so I wonder if my issues are with the driver or executor.
>>
>> I set:
>>
>> spark.driver.memory 20g
>> spark.executor.memory 20g
>>
>> And, whatever I do, the crash is always at the same spot in the app, which
>> makes me think that it is a driver problem.
>>
>> The exception I get is:
>>
>> 16/07/13 20:36:30 WARN TaskSetManager: Lost task 0.0 in stage 7.0 (TID 208,
>> micha.nc.rr.com): java.lang.OutOfMemoryError: Java heap space
>> at java.nio.HeapCharBuffer.<init>(HeapCharBuffer.java:57)
>> at java.nio.CharBuffer.allocate(CharBuffer.java:335)
>> at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:810)
>> at org.apache.hadoop.io.Text.decode(Text.java:412
ActorSystem [sparkDriver]
java.lang.OutOfMemoryError: Java heap space
at
com.google.protobuf.AbstractMessageLite.toByteArray(AbstractMessageLite.java:62)
at
akka.remote.transport.AkkaPduProtobufCodec$.constructMessage(AkkaPduCodec.scala:138)
at akka.remote.EndpointWriter.writeSend(Endpoint.scala:740
Hi, All
I'm joining a small table (about 200m) with a huge table using a broadcast
join; however, Spark throws the exception below:
16/03/20 22:32:06 WARN TransportChannelHandler: Exception in connection from
java.lang.OutOfMemoryError: Direct buffer memory
For uniform partitioning, you can try a custom Partitioner.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/java-lang-OutOfMemoryError-Requested-array-size-exceeds-VM-limit-tp16809p26477.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
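A custom Partitioner of the sort suggested above could be sketched as follows (this is illustrative only; the class name and partition count are made up, not from the thread):

```scala
import org.apache.spark.Partitioner

// Illustrative sketch: spread keys uniformly by hashing into fixed buckets.
class UniformHashPartitioner(override val numPartitions: Int) extends Partitioner {
  require(numPartitions > 0)
  override def getPartition(key: Any): Int = {
    val h = if (key == null) 0 else key.hashCode()
    // Force a non-negative bucket index, even for Int.MinValue hash codes.
    ((h % numPartitions) + numPartitions) % numPartitions
  }
}

// Usage on a pair RDD: pairRdd.partitionBy(new UniformHashPartitioner(300))
```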
>> From: Shuai Zheng [mailto:szheng.c...@gmail.com]
>> Sent: Wednesday, November 04, 2015 3:22 PM
>> To: user@spark.apache.org
>> Subject: [Spark 1.5]: Exception in thread "broadcast-hash-join-2"
>> java.lang.OutOfMemoryError: Java heap
>>> ...proven that there is no issue in the logic and data; it is caused by
>>> the new version of Spark.
>>>
>>> So I want to know any new setup I should set in Spark 1.5 to make it
>>> work?
...the partitioned dataset successfully. I can see the output in
HDFS once all Spark tasks are done.

After the Spark tasks are done, the job appears to be running for over an
hour, until I get the following (full stack trace below):

java.lang.OutOfMemoryError: GC overhead limit exceeded
at
org.apache.parquet.format.converter.ParquetMetadataConverter.toParquetStatistics(ParquetMetadataConverter.java:238)

I had set the driver memory to be 20GB.
I attempted
'An error occurred while calling {0}{1}{2}.\n'.
--> 300 format(target_id, '.', name), value)
301 else:
302 raise Py4JError(
Py4JJavaError: An error occurred while calling o65.partitions.
: java.lang.OutOfMemoryError: GC overhead limit exceed
val Patel [mailto:dhaval1...@gmail.com]
Sent: Saturday, November 7, 2015 12:26 AM
To: Spark User Group
Subject: [sparkR] Any insight on java.lang.OutOfMemoryError: GC overhead limit
exceeded
I have been struggling with this error for the past 3 days and have tried all
possible ways/suggest
cast_2_piece0 on
localhost:39562 in memory (size: 2.4 KB, free: 530.0 MB)
15/11/06 10:45:20 INFO ContextCleaner: Cleaned accumulator 2
15/11/06 10:45:53 WARN ServletHandler: Error for /static/timeline-view.css
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.zip.Zip
Exception in thread "broadcast-hash-join-2"
java.lang.OutOfMemoryError: Java heap space

Hi All,
I have a program which runs a fairly complex business logic (a join) in
Spark, and I get the exception below.
I am running on Spark 1.5, with parameters:
spark-submit --deploy-mode client --executor-cores=24 --driver-memory=2G
...").set("spark.sql.autoBroadcastJoinThreshold", "104857600");
This is running on an AWS c3.8xlarge instance. I am not sure what kind of
parameters I should set if I get the OutOfMemoryError exception below.
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="kill -9
I get java.lang.OutOfMemoryError: GC overhead limit exceeded when trying a
count action on a file.
The file is a CSV file, 217GB in size.
I'm using 10 r3.8xlarge (Ubuntu) machines, CDH 5.3.6 and Spark 1.2.0.
configuration:
spark.app.id: local-1443956477103
spark.app.name: Spark shell
spark.cores.max
1.2.0 is quite old.
You may want to try 1.5.1 which was released in the past week.
Cheers
> On Oct 4, 2015, at 4:26 AM, t_ras <marti...@netvision.net.il> wrote:
Sorry; to answer your question fully:
The job starts tasks, and a few of them fail while some are successful. The
failed ones have that PermGen error in the logs.
But ultimately the full job is marked failed and the session quits.
On Sun, Sep 13, 2015 at 10:48 AM, Jagat Singh wrote:
> Hi
Hi Davies,
This was first query on new version.
The one which ran successfully was Spark Pi example
./bin/spark-submit --class org.apache.spark.examples.SparkPi \
--master yarn-client \
--num-executors 3 \
--driver-memory 4g \
--executor-memory 2g \
--executor-cores 1 \
Did this happen immediately after you started the cluster, or after running
some queries?
Is this in local mode or cluster mode?
On Fri, Sep 11, 2015 at 3:00 AM, Jagat Singh wrote:
> Hi,
>
> We have queries which were running fine on 1.4.1 system.
>
> We are testing upgrade and
Hi,
We have queries which were running fine on 1.4.1 system.
We are testing upgrade and even simple query like
val t1= sqlContext.sql("select count(*) from table")
t1.show
This works perfectly fine on 1.4.1 but throws OOM error in 1.5.0
Are there any changes in default memory settings from
Have you seen this thread ?
http://search-hadoop.com/m/q3RTtPPuSvBu0rj2
> On Sep 11, 2015, at 3:00 AM, Jagat Singh wrote:
Sent: Saturday, 11 July 2015 03:58
To: Ted Yu; Robin East; user
Subject: Re: Spark GraphX memory requirements + java.lang.OutOfMemoryError: GC
overhead limit exceeded
Hello again.
So I could compute triangle numbers when run the code from spark shell without
workers (with --driver-memory 15g option
...the html that has the shortest URL. However, after
running for 2-3 hours the application crashes due to a memory issue. Here is
the exception:

15/07/15 18:24:05 WARN scheduler.TaskSetManager: Lost task 267.0 in stage 0.0
(TID 267, psh-11.nse.ir): java.lang.OutOfMemoryError: GC overhead limit
exceeded

It seems that the map function keeps the hashDocs RDD in memory, and when
the memory fills up in an executor, the application crashes.
Persisting the map output to disk solves the problem. Adding the
following line between map and reduce solve
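The message is cut off before the actual line; based on the description ("persisting the map output to disk"), it was presumably something like the sketch below. `StorageLevel.DISK_ONLY` is my assumption; only the `hashDocs` name comes from the thread.

```scala
import org.apache.spark.storage.StorageLevel

// Sketch only -- the original message is truncated before its code line.
// Spill the map output to disk instead of keeping it on-heap.
val hashDocsOnDisk = hashDocs.persist(StorageLevel.DISK_ONLY)
```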
: (0 + 8) / 32]
15/07/11 01:48:45 WARN TaskSetManager: Lost task 2.0 in stage 7.0 (TID
130, 192.168.0.28): io.netty.handler.codec.DecoderException:
java.lang.OutOfMemoryError
at io.netty.handler.codec.ByteToMessageDecoder.channelRead
Stati,
Change SPARK_REPL_OPTS to SPARK_SUBMIT_OPTS and try again. I faced the same
issue and making this change worked for me. I looked at the spark-shell
file under the bin dir and found SPARK_SUBMIT_OPTS being used.
SPARK_SUBMIT_OPTS=-XX:MaxPermSize=256m bin/spark-shell --master
Ok, but what does it mean? I did not change the core files of Spark, so is
it a bug there?
PS: on small datasets (500 MB) I have no problem.
On 25.06.2015 18:02, Ted Yu yuzhih...@gmail.com wrote:
The assertion failure from TriangleCount.scala corresponds with the
following lines:
You’ll get this issue if you just take the first 2000 lines of that file. The
problem is triangleCount() expects srcId < dstId, which is not the case in the
file (e.g. vertex 28). You can get round this by calling
graph.convertToCanonicalEdges(), which removes bi-directional edges and ensures
Yep, I already found it. So I added one line:
val graph = GraphLoader.edgeListFile(sc, ...)
val newgraph = graph.convertToCanonicalEdges()
and could successfully count triangles on newgraph. Next will test it on
bigger (several Gb) networks.
I am using Spark 1.3 and 1.4 but haven't seen
Hello!
I am trying to compute number of triangles with GraphX. But get memory
error or heap size, even though the dataset is very small (1Gb). I run the
code in spark-shell, having 16Gb RAM machine (also tried with 2 workers on
separate machines 8Gb RAM each). So I have 15x more memory than the
The assertion failure from TriangleCount.scala corresponds with the
following lines:
g.outerJoinVertices(counters) {
  (vid, _, optCounter: Option[Int]) =>
    val dblCount = optCounter.getOrElse(0)
    // double count should be even (divisible by two)
    assert((dblCount & 1) == 0)
Hello,
I moved from 1.3.1 to 1.4.0 and started receiving
java.lang.OutOfMemoryError: PermGen space when I use spark-shell.
Same Scala code works fine in 1.3.1 spark-shell. I was loading same set of
external JARs and have same imports in 1.3.1.
I tried increasing perm size to 256m. I still got
Did you try to pass it with
--driver-java-options -XX:MaxPermSize=256m
as spark-shell input argument?
Roberto
On Wed, Jun 24, 2015 at 5:57 PM, stati srikanth...@gmail.com wrote:
To: user@spark.apache.org
Sent: Thursday, June 11, 2015 8:43 AM
Subject: spark-sql from CLI --- EXCEPTION: java.lang.OutOfMemoryError: Java
heap space

hey guys
Using Hive and Impala daily, intensively. Want to transition to spark-sql in
CLI mode.
Currently in my sandbox I am using Spark (standalone mode
best regards
sanjay
From: Josh Rosen rosenvi...@gmail.com
To: Sanjay Subramanian sanjaysubraman...@yahoo.com
Cc: user@spark.apache.org user@spark.apache.org
Sent: Friday, June 12, 2015 7:15 AM
Subject: Re: spark-sql from CLI ---EXCEPTION: java.lang.OutOfMemoryError:
Java heap
Hello All,
I have a Spark job that throws java.lang.OutOfMemoryError: GC overhead
limit exceeded.
The job is trying to process a file of size 4.5GB.
I've tried the following Spark configuration:
--num-executors 6 --executor-memory 6G --executor-cores 6 --driver-memory 3G
I tried increasing more cores and executors, which sometimes works
---EXCEPTION: java.lang.OutOfMemoryError:
Java heap space

It sounds like this might be caused by a memory configuration problem. In
addition to looking at the executor memory, I'd also bump up the driver
memory, since it appears that your shell is running out of memory when
collecting a large query
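Concretely, bumping the driver memory for the CLI could look like the sketch below (the 4g value is a placeholder for illustration, not a recommendation from the thread):

```shell
# Sketch: give the spark-sql CLI or spark-shell a larger driver heap,
# since results of a collected query are materialized on the driver.
spark-sql --driver-memory 4g
spark-shell --driver-memory 4g
```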
while handling an exception event ([id: 0x01b99855,
/10.0.0.19:58117 => /10.0.0.19:52016] EXCEPTION: java.lang.OutOfMemoryError:
Java heap space)
java.lang.OutOfMemoryError: Java heap space
at
org.jboss.netty.buffer.HeapChannelBuffer.<init>(HeapChannelBuffer.java:42)
at
org.jboss.netty.buffer.BigEndianHeapChannelBuffer.<init>
Hi,
I'm trying to train an SVM on KDD2010 dataset (available from libsvm). But
I'm getting java.lang.OutOfMemoryError: Java heap space error. The dataset
is really sparse and have around 8 million data points and 20 million
features. I'm using a cluster of 8 nodes (each with 8 cores and 64G RAM
Try increasing your driver memory.
Thanks
Best Regards
On Thu, Apr 16, 2015 at 6:09 PM, sarath sarathkrishn...@gmail.com wrote:
I am seeing various crashes in Spark on large jobs which all share a
similar exception:

java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)

I increased nproc (i.e. ulimit -u) 10-fold, but it doesn't help.
Does anyone know how to avoid those kinds of errors?
1 - 100 of 161 matches