Here is a UI of my thread dump.
http://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMTYvMTEvMS8tLWpzdGFja19kdW1wX3dpbmRvd19pbnRlcnZhbF8xbWluX2JhdGNoX2ludGVydmFsXzFzLnR4dC0tNi0xNy00Ng==
Hi Vadim,
Thank you so much, this was a very useful command. This conversation is
going on here
https://www.mail-archive.com/user@spark.apache.org/msg58656.html
or you can just google "why spark driver program is creating so many threads? How can I limit this number?"
Have you tried to get the number of threads in a running process using `cat
/proc/<pid>/status` ?
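(Side note, not from the thread: on Linux the `Threads:` field in /proc/<pid>/status gives that count directly. A minimal sketch, using the shell's own PID `$$` as a stand-in for the Spark driver's PID:)

```shell
# Count the threads of a process via /proc (Linux only).
# $$ (this shell's PID) stands in for the Spark driver's PID.
grep '^Threads' "/proc/$$/status"
# Cross-check: each thread has a directory under /proc/<pid>/task
ls "/proc/$$/task" | wc -l
```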
yes I did run ps -ef | grep "app_name" and it is root.
On Sun, Oct 30, 2016 at 8:00 PM, Chan Chor Pang wrote:
sorry, the UID
On 10/31/16 11:59 AM, Chan Chor Pang wrote:
actually, if the max user processes is not the problem, I have no idea,
but I still suspect the user,
as the user who runs spark-submit is not necessarily the owner of the JVM
process.
Can you make sure that when you run "ps -ef | grep {your app id}" the owner of the PID is root?
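(A quick way to check that, sketched here with the shell's own PID `$$` standing in for the driver's:)

```shell
# Print the owner, PID, and command of a process; in practice,
# replace $$ with the Spark driver's PID.
ps -o user=,pid=,comm= -p "$$"
```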
On 10/31/16 11:21 AM, kant kodali wrote:
The Java process is run by root, and it has the same config:
sudo -i
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i)
I had the same exception before, and the problem was fixed after I changed the
nproc conf.
> max user processes (-u) 120242
↑ this config does look good.
Are you sure the user who ran ulimit -a is the same user who runs the Java
process?
It depends on how you submit the job and your settings.
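(One way to sidestep that doubt, not mentioned in the thread: read the limits the running process actually has from /proc/<pid>/limits, which can differ from `ulimit -a` in your own shell when the JVM runs as another user. Sketch, with the shell's PID as a stand-in:)

```shell
# Limits in effect for a *running* process (Linux).
# In practice, use the JVM's PID instead of $$.
grep -E 'Max (processes|open files)' "/proc/$$/limits"
```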
when I did this
cat /proc/sys/kernel/pid_max
I got 32768
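(Besides pid_max, there is a second kernel-wide ceiling that caps thread creation, readable the same way; a sketch:)

```shell
cat /proc/sys/kernel/pid_max      # highest PID value; thread IDs draw from this space
cat /proc/sys/kernel/threads-max  # system-wide cap on the number of threads
```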
I believe for ubuntu it is unlimited but I am not 100% sure (I just read
somewhere online). I ran ulimit -a and this is what I get
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f)
not sure for Ubuntu, but I think you can just create the file yourself;
the syntax will be the same as /etc/security/limits.conf.
nproc.conf limits not only the Java process but all processes owned by the
same user, so even if the JVM process does nothing, the limit can still be
hit when the corresponding user is busy in other ways.
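(For reference, a hypothetical /etc/security/limits.d/90-nproc.conf in that syntax; the user name and values below are made-up examples, not recommendations:)

```
# <domain>   <type>  <item>  <value>
*            soft    nproc   4096
sparkuser    soft    nproc   32768
sparkuser    hard    nproc   65536
```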
Hi,
I am using Ubuntu 16.04 LTS. I have this directory /etc/security/limits.d/
but I don't have any files underneath it. This error happens after running
for 4 to 5 hours. I
you may want to check the process limit of the user who is responsible for
starting the JVM:
/etc/security/limits.d/90-nproc.conf
On 10/29/16 4:47 AM, kant kodali wrote:
"dag-scheduler-event-loop" java.lang.OutOfMemoryError: unable to
create new native thread
at
Another thing I forgot to mention is that it happens after running for
several hours, say 4 to 5. I am not sure why it is creating so many
threads. Is there any way to control them?
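(One way to see what those threads actually are, sketched: capture a dump with jstack, which ships with the JDK, and tally thread names. The two-line stand-in dump below, with hypothetical thread entries, replaces a real capture so the pipeline itself can run:)

```shell
# A real capture would be: jstack <driver-pid> > dump.txt
# Stand-in dump (hypothetical thread names):
cat > dump.txt <<'EOF'
"dag-scheduler-event-loop" #41 daemon prio=5
"Executor task launch worker-0" #42 daemon prio=5
EOF
# Tally thread names, most frequent first:
awk -F'"' '/^"/ {print $2}' dump.txt | sort | uniq -c | sort -rn
```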
I have a YARN cluster where the max memory allowed is 16GB. I set 12G for
my driver; however, I see an OutOfMemory error even for this program
http://spark.apache.org/docs/1.3.0/sql-programming-guide.html#hive-tables .
What do you suggest?
On Wed, Mar 25, 2015 at 8:23 AM, Thomas Gerber wrote:
This is a different kind of error. Thomas' OOM error was specific to the
kernel refusing to create another thread/process for his application.
Matthew
On Wed, Mar 25, 2015 at 10:51 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) deepuj...@gmail.com wrote:
My memory is hazy on this but aren't there hidden limitations to
Linux-based threads? I ran into some issues a couple of years ago where,
and here is the fuzzy part, the kernel wants to reserve virtual memory per
thread equal to the stack size. When the total amount of reserved memory
(not
So,
1. I reduced my -XX:ThreadStackSize to 5m (instead of 10m - default is
1m), which is still OK for my need.
2. I reduced the executor memory to 44GB for a 60GB machine (instead of
49GB).
This seems to have helped. Thanks to Matthew and Sean.
Thomas
On Tue, Mar 24, 2015 at 3:49 PM, Matt
I doubt you're hitting the limit of threads you can spawn, but as you
say, running out of memory that the JVM process is allowed to allocate
since your threads are grabbing stacks 10x bigger than usual. The
thread stacks are 4GB by themselves.
I suppose you can't avoid upping the stack size so much?
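(Spelling out that arithmetic as a sketch, with round numbers like those in this thread — about 400 executor threads at a 10 MB stack:)

```shell
THREADS=400    # rough executor thread count mentioned in the thread
STACK_MB=10    # per-thread stack size being used
# 400 threads x 10 MB = 4000 MB, i.e. roughly 4 GB reserved for stacks alone
echo "$(( THREADS * STACK_MB )) MB of virtual memory reserved for stacks"
```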
Additional notes:
I did not find anything wrong with the number of threads (ps -u USER -L |
wc -l): around 780 on the master and 400 on executors. I am running on 100
r3.2xlarge.
On Tue, Mar 24, 2015 at 12:38 PM, Thomas Gerber thomas.ger...@radius.com
wrote:
Hello,
I am seeing various crashes