Re: Error report file is deleted automatically after spark application finished

2016-06-30 Thread dhruve ashar
There could be multiple reasons why it's not being generated even after
setting the ulimit appropriately.

Try out the options listed on this thread:
http://stackoverflow.com/questions/7732983/core-dump-file-is-not-generated
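
Two of the more common causes are worth ruling out first (a rough checklist
assuming a typical Linux setup; the pid below is just a placeholder):

  # 1. Where the kernel writes core files. If this is a pipe to a crash
  #    handler such as apport or abrt, you won't get a plain core file in
  #    the container's working directory.
  sysctl kernel.core_pattern

  # 2. Whether the unlimited setting actually reaches the NodeManager.
  #    limits.conf is applied by PAM to login sessions, so a daemon started
  #    at boot may still be running with the old limit until it is restarted
  #    or the limit is raised in its launch script.
  grep "core file size" /proc/<nodemanager-pid>/limits

Container processes normally inherit their limits from the NodeManager that
launches them, so the second check is usually the more telling one.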


On Thu, Jun 30, 2016 at 2:25 AM, prateek arora 
wrote:

> Thanks for the information. My problem is resolved now.
>
>
>
> I have one more issue.
>
>
>
> I am not able to save the core dump file. It always shows: "# Failed to write
> core dump. Core dumps have been disabled. To enable core dumping, try
> "ulimit -c unlimited" before starting Java again"
>
>
>
> I set the core dump limit to unlimited on all nodes using the settings below:
> edit the /etc/security/limits.conf file and add the line "* soft core unlimited".
>
> I rechecked using: $ ulimit -all
>
> core file size  (blocks, -c) unlimited
> data seg size   (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size   (blocks, -f) unlimited
> pending signals (-i) 241204
> max locked memory   (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files  (-n) 1024
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 8192
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 241204
> virtual memory  (kbytes, -v) unlimited
> file locks  (-x) unlimited
>
> but when my Spark application crashes, it shows the error "Failed to
> write core dump. Core dumps have been disabled. To enable core dumping, try
> "ulimit -c unlimited" before starting Java again".
>
>
> Regards
>
> Prateek
>
>
>
>
>
> On Wed, Jun 29, 2016 at 9:30 PM, dhruve ashar 
> wrote:
>
>> You can look at the yarn-default configuration file.
>>
>> Check your log-related settings to see whether log aggregation is enabled,
>> and also check the log retention duration to see whether it is too small and
>> files are being deleted.
>>
>> On Wed, Jun 29, 2016 at 4:47 PM, prateek arora <
>> prateek.arora...@gmail.com> wrote:
>>
>>>
>>> Hi
>>>
>>> My Spark application crashed and showed the following information:
>>>
>>> LogType:stdout
>>> Log Upload Time:Wed Jun 29 14:38:03 -0700 2016
>>> LogLength:1096
>>> Log Contents:
>>> #
>>> # A fatal error has been detected by the Java Runtime Environment:
>>> #
>>> #  SIGILL (0x4) at pc=0x7f67baa0d221, pid=12207, tid=140083473176320
>>> #
>>> # JRE version: Java(TM) SE Runtime Environment (7.0_67-b01) (build
>>> 1.7.0_67-b01)
>>> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode
>>> linux-amd64 compressed oops)
>>> # Problematic frame:
>>> # C  [libcaffe.so.1.0.0-rc3+0x786221]  sgemm_kernel+0x21
>>> #
>>> # Failed to write core dump. Core dumps have been disabled. To enable
>>> core
>>> dumping, try "ulimit -c unlimited" before starting Java again
>>> #
>>> # An error report file with more information is saved as:
>>> #
>>>
>>> /yarn/nm/usercache/ubuntu/appcache/application_1467236060045_0001/container_1467236060045_0001_01_03/hs_err_pid12207.log
>>>
>>>
>>>
>>> but I am not able to find the file
>>>
>>> "/yarn/nm/usercache/ubuntu/appcache/application_1467236060045_0001/container_1467236060045_0001_01_03/hs_err_pid12207.log"
>>>
>>> It is deleted automatically after the Spark application finishes.
>>>
>>> How can I retain the report file? I am running Spark on YARN.
>>>
>>> Regards
>>> Prateek
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://apache-spark-user-list.1001560.n3.nabble.com/Error-report-file-is-deleted-automatically-after-spark-application-finished-tp27247.html
>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>>
>>> -
>>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>>
>>>
>>
>>
>> --
>> -Dhruve Ashar
>>
>>
>


-- 
-Dhruve Ashar


Re: Error report file is deleted automatically after spark application finished

2016-06-30 Thread prateek arora
Thanks for the information. My problem is resolved now.



I have one more issue.



I am not able to save the core dump file. It always shows: "# Failed to write
core dump. Core dumps have been disabled. To enable core dumping, try
"ulimit -c unlimited" before starting Java again"



I set the core dump limit to unlimited on all nodes using the settings below:
edit the /etc/security/limits.conf file and add the line "* soft core unlimited".

I rechecked using: $ ulimit -all

core file size  (blocks, -c) unlimited
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 241204
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 241204
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

but when my Spark application crashes, it shows the error "Failed to
write core dump. Core dumps have been disabled. To enable core dumping, try
"ulimit -c unlimited" before starting Java again".


Regards

Prateek





On Wed, Jun 29, 2016 at 9:30 PM, dhruve ashar  wrote:

> You can look at the yarn-default configuration file.
>
> Check your log-related settings to see whether log aggregation is enabled,
> and also check the log retention duration to see whether it is too small and
> files are being deleted.
>
> On Wed, Jun 29, 2016 at 4:47 PM, prateek arora  > wrote:
>
>>
>> Hi
>>
>> My Spark application crashed and showed the following information:
>>
>> LogType:stdout
>> Log Upload Time:Wed Jun 29 14:38:03 -0700 2016
>> LogLength:1096
>> Log Contents:
>> #
>> # A fatal error has been detected by the Java Runtime Environment:
>> #
>> #  SIGILL (0x4) at pc=0x7f67baa0d221, pid=12207, tid=140083473176320
>> #
>> # JRE version: Java(TM) SE Runtime Environment (7.0_67-b01) (build
>> 1.7.0_67-b01)
>> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode
>> linux-amd64 compressed oops)
>> # Problematic frame:
>> # C  [libcaffe.so.1.0.0-rc3+0x786221]  sgemm_kernel+0x21
>> #
>> # Failed to write core dump. Core dumps have been disabled. To enable core
>> dumping, try "ulimit -c unlimited" before starting Java again
>> #
>> # An error report file with more information is saved as:
>> #
>>
>> /yarn/nm/usercache/ubuntu/appcache/application_1467236060045_0001/container_1467236060045_0001_01_03/hs_err_pid12207.log
>>
>>
>>
>> but I am not able to find the file
>>
>> "/yarn/nm/usercache/ubuntu/appcache/application_1467236060045_0001/container_1467236060045_0001_01_03/hs_err_pid12207.log"
>>
>> It is deleted automatically after the Spark application finishes.
>>
>> How can I retain the report file? I am running Spark on YARN.
>>
>> Regards
>> Prateek
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/Error-report-file-is-deleted-automatically-after-spark-application-finished-tp27247.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>> -
>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>
>>
>
>
> --
> -Dhruve Ashar
>
>


Re: Error report file is deleted automatically after spark application finished

2016-06-29 Thread dhruve ashar
You can look at the yarn-default configuration file.

Check your log-related settings to see whether log aggregation is enabled,
and also check the log retention duration to see whether it is too small and
files are being deleted.
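
For example, these are the usual suspects (the config path below is a guess,
so adjust it for your distribution):

  # Keep finished containers' working directories, which is where the
  # hs_err_pid*.log files land, for 10 minutes after the application ends:
  #   yarn.nodemanager.delete.debug-delay-sec = 600
  # And check the aggregation / retention settings:
  #   yarn.log-aggregation-enable
  #   yarn.log-aggregation.retain-seconds   (used when aggregation is on)
  #   yarn.nodemanager.log.retain-seconds   (used when it is off)
  # See what the cluster is currently configured with:
  grep -A1 -E "delete.debug-delay|log-aggregation|log.retain" /etc/hadoop/conf/yarn-site.xml

The debug-delay setting is probably the most relevant here, since the hs_err
file is written to the container's working directory under appcache, which
plain log aggregation will not pick up.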

On Wed, Jun 29, 2016 at 4:47 PM, prateek arora 
wrote:

>
> Hi
>
> My Spark application crashed and showed the following information:
>
> LogType:stdout
> Log Upload Time:Wed Jun 29 14:38:03 -0700 2016
> LogLength:1096
> Log Contents:
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGILL (0x4) at pc=0x7f67baa0d221, pid=12207, tid=140083473176320
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_67-b01) (build
> 1.7.0_67-b01)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libcaffe.so.1.0.0-rc3+0x786221]  sgemm_kernel+0x21
> #
> # Failed to write core dump. Core dumps have been disabled. To enable core
> dumping, try "ulimit -c unlimited" before starting Java again
> #
> # An error report file with more information is saved as:
> #
>
> /yarn/nm/usercache/ubuntu/appcache/application_1467236060045_0001/container_1467236060045_0001_01_03/hs_err_pid12207.log
>
>
>
> but I am not able to find the file
>
> "/yarn/nm/usercache/ubuntu/appcache/application_1467236060045_0001/container_1467236060045_0001_01_03/hs_err_pid12207.log"
>
> It is deleted automatically after the Spark application finishes.
>
> How can I retain the report file? I am running Spark on YARN.
>
> Regards
> Prateek
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Error-report-file-is-deleted-automatically-after-spark-application-finished-tp27247.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


-- 
-Dhruve Ashar