If you could point out where I might be making a mistake, it would be of great help.
Thank you again for all your help!
Regards,
Arijit
From: Matthias Boehm
Sent: Monday, July 17, 2017 2:09:47 AM
To: dev@systemml.apache.org
Subject: Re: Decaying performance of SystemML
thanks for sharing.
> Total JVM GC time: 8.237 sec.
> Heavy hitter instructions (name, time, count):
> -- 1) %% 33.235 sec 10030
> -- 2) rmvar 27.762 sec 250750035
> -- 3) == 26.179 sec 100300017
> -- 4) + 15.555 sec 5015
> -- 5) … sec 50150018
> -- 6) sp_seq 0.675 sec 1
> -- 7) sp_rshape 0.070 sec 1
> -- 8) sp_chkpoint 0.017 sec 3
> -- 9) seq 0.014 sec 3
> -- 10) rshape 0.003 sec 3
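For what it's worth, heavy hitters dominated by rmvar and == with counts in the hundreds of millions usually point to scalar operations inside a loop, since each iteration emits its own comparison, addition, and temporary-variable cleanup instructions. A minimal DML sketch (purely hypothetical, not the actual script) that would produce a profile of this shape:

```
# Hypothetical shape of a script that generates millions of ==, +, and rmvar
# instructions: one of each per loop iteration, plus temporary cleanups.
X = round(rand(rows=100000, cols=1))
s = 0
for (i in 1:nrow(X)) {
  if (as.scalar(X[i, 1]) == 1)
    s = s + 1
}
# A vectorized equivalent compiles to a single matrix == and one aggregate,
# avoiding the per-iteration instruction overhead:
s2 = sum(X[, 1] == 1)
print("count: " + s2)
```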
Thank you!
Arijit
____
From: arijit chakraborty
Sent: Wednesda
To: dev@systemml.apache.org
Subject: Re: Decaying performance of SystemML
Without any specifics of scripts or datasets, it is unfortunately hard, if not impossible, to help you here. However, note that the memory configuration seems wrong. Why would you configure the driver and executors with 2TB if you only have 256GB per node? Maybe you are observing a swapping issue.
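As a rough sketch (the exact sizes, YARN setup, and script name below are assumptions for illustration, not a recommendation), a configuration that stays within 256 GB of physical RAM per node might look like:

```shell
# Sketch only: keep JVM heaps well under the 256 GB physical RAM per node,
# leaving headroom for off-heap memory, the OS, and other services.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --driver-memory 20g \
  --executor-memory 100g \
  --num-executors 10 \
  --executor-cores 16 \
  SystemML.jar -f script.dml -stats
```

The -stats flag produces the execution-time and heavy-hitter summary quoted earlier in this thread.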
Hi,
I'm creating a process using SystemML, but after a certain period of time the performance decreases.
1) This warning message: WARN TaskSetManager: Stage 25254 contains a task of
very large size (3954 KB). The maximum recommended task size is 100 KB.
2) For Spark, we are implementing this