ingested is only 506kb.
>
>
> 16/11/23 03:05:54 INFO MappedDStream: Slicing from 1479850537180 ms to
> 1479850537235 ms (aligned to 1479850537180 ms and 1479850537235 ms)
>
> Exception in thread "streaming-job-executor-0"
> java.lang.OutOfMemoryError: unable to create new native thread
I looked it up and found that it could be related to ulimit; I even
increased the ulimit to 1 but still get the same error.
Regards
Mohit
Here is a UI view of my thread dump:
http://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMTYvMTEvMS8tLWpzdGFja19kdW1wX3dpbmRvd19pbnRlcnZhbF8xbWluX2JhdGNoX2ludGVydmFsXzFzLnR4dC0tNi0xNy00Ng==
On Mon, Oct 31, 2016 at 10:32 PM, kant kodali wrote:
Hi Vadim,
Thank you so much, this was a very useful command. This conversation is
going on here:
https://www.mail-archive.com/user@spark.apache.org/msg58656.html
or you can just google "why spark driver program is creating so many
threads? How can I limit this number?"
Have you tried to get the number of threads in a running process using
`cat /proc/<pid>/status`?
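To make that check concrete: the `<pid>` placeholder stands for the Spark driver's process id; the sketch below uses the current shell (`$$`) as a stand-in so the commands are runnable as-is.

```shell
# Read the thread count straight from the kernel's per-process status file.
# <pid> would be the Spark driver's PID; the current shell ($$) is used
# here only as a stand-in.
PID=$$
grep '^Threads:' /proc/$PID/status   # number of threads in the process
ps -o nlwp= -p "$PID"                # same count via ps (NLWP column)
```

Watching that number climb over time would confirm a thread leak rather than a limit that is simply too low.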
On Sun, Oct 30, 2016 at 11:04 PM, kant kodali wrote:
yes I did run ps -ef | grep "app_name" and it is root.
On Sun, Oct 30, 2016 at 8:00 PM, Chan Chor Pang wrote:
sorry, the UID
On 10/31/16 11:59 AM, Chan Chor Pang wrote:
actually if the max user processes is not the problem, i have no idea.
but i am still suspecting the user, as the user who runs spark-submit is
not necessarily the owner of the JVM process.
can u make sure that when you run "ps -ef | grep {your app id}" the owner
of the PID is root?
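A quick way to verify that, sketched below; "my-spark-app" is a placeholder for whatever string identifies your driver's command line.

```shell
# Find the JVM's PID by matching its command line, then print its owner.
# "my-spark-app" is a hypothetical pattern; substitute your app id.
PID=$(pgrep -f "my-spark-app" | head -n 1)
if [ -n "$PID" ]; then
  ps -o user= -p "$PID"   # the user the JVM actually runs as
fi
```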
On 10/31/16 11:21 AM, kant kodali wrote:
The Java process is run by root and it has the same config:
sudo -i
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i)
I had the same exception before, and the problem was fixed after i
changed the nproc conf.
> max user processes (-u) 120242
↑this config does look good.
are u sure the user who ran ulimit -a is the same user who runs the Java
process?
it depends on how u submit the job and your settings,
when I did this
cat /proc/sys/kernel/pid_max
I got 32768
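Besides the per-user ulimit, there are kernel-wide knobs that also cap thread creation, since every native thread consumes a PID. A quick way to inspect all three:

```shell
# Every Java thread is a native thread and consumes a PID, so any of
# these caps can trigger "unable to create new native thread":
cat /proc/sys/kernel/pid_max       # highest PID the kernel will hand out
cat /proc/sys/kernel/threads-max   # system-wide limit on total threads
ulimit -u                          # per-user processes/threads limit
```

If the driver is leaking threads, raising any of these only delays the crash.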
On Sun, Oct 30, 2016 at 6:36 PM, kant kodali wrote:
I believe for ubuntu it is unlimited but I am not 100% sure (I just read
somewhere online). I ran ulimit -a and this is what I get
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f)
not sure for ubuntu, but i think you can just create the file yourself;
the syntax will be the same as /etc/security/limits.conf.
nproc.conf limits not only the java process but all processes by the same
user, so even if the jvm process does nothing, the corresponding user may
be busy in other ways.
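For reference, a minimal sketch of what such a file could contain; the 65536 value is purely illustrative, not a recommendation:

```
# /etc/security/limits.d/90-nproc.conf
# <domain>  <type>  <item>  <value>
*           soft    nproc   65536
root        soft    nproc   unlimited
```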
On Sun, Oct 30, 2016 at 5:22 PM, Chan Chor Pang wrote:
> /etc/security/limits.d/90-nproc.conf
Hi,
I am using Ubuntu 16.04 LTS. I have this directory /etc/security/limits.d/
but I don't have any files underneath it. This error happens after running
for 4 to 5 hours. I
you may want to check the process limit of the user who is responsible
for starting the JVM.
/etc/security/limits.d/90-nproc.conf
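One detail worth knowing here: ulimit in your shell shows your shell's limits, not necessarily the JVM's. The limits actually in force for any running process can be read from /proc (shown below against the current process):

```shell
# /proc/<pid>/limits lists the limits applied to that specific process,
# which can differ from what "ulimit -a" prints in your shell.
# "self" refers to the current process; substitute the JVM's PID.
grep -E 'Max (processes|open files)' /proc/self/limits
```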
On 10/29/16 4:47 AM, kant kodali wrote:
"dag-scheduler-event-loop" java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:714)
        at scala.concurrent.forkjoin.ForkJoinPool.tryAddWorker(ForkJoinPool
Hello,
I am seeing various crashes in spark on large jobs which all share a
similar exception:
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:714)
I increased nproc (i.e. ulimit -u) 10 fold, but it doesn't help.
Does anyone know how to avoid those kinds of errors?
Noteworthy
?
If so then I think you need to make more, smaller executors instead?
On Tue, Mar 24, 2015 at 7:38 PM, Thomas Gerber thomas.ger...@radius.com wrote:
Hi
I am trying the Spark sample program "SparkPi" and I got an error
"unable to create new native thread". How do I resolve this?
14/09/11 21:36:16 INFO scheduler.DAGScheduler: Completed ResultTask(0, 644)
14/09/11 21:36:16 INFO scheduler.TaskSetManager: Finished TID 643 in 43 ms on
node1 (progress
(Method.java:622)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:256)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:54)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread