This looks like a bug in Mono's qsort.

A quicksort over 229,138 entries should not need more than 18-36 levels of recursion (on the order of log2(n)), not the ~18,000 frames seen in this backtrace.

Vlad/John, could you look at this issue?

--
Rodrigo

On Tue, Nov 1, 2016 at 9:35 AM, Burkhard Linke <
bli...@cebitec.uni-bielefeld.de> wrote:

> Hi,
>
>
> the allocation indeed is caused by mmap being unable to create additional
> mappings.
>
>
> With more mapping the application is able to continue, but runs into
> another problem:
>
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7f5afe3b8700 (LWP 55986)]
> 0x000000000061cd37 in memcpy (__src=0x7f5ab2e147f8, __dest=0x7f5afe3b6c30,
>     __len=8) at /usr/include/x86_64-linux-gnu/bits/string3.h:52
> 52    }
> (gdb) bt
> #0  0x000000000061cd37 in memcpy (__src=0x7f5ab2e147f8,
> __dest=0x7f5afe3b6c30,
>     __len=8) at /usr/include/x86_64-linux-gnu/bits/string3.h:52
> #1  partition (swap_tmp=0x7f5afe3b6c20 "", pivot_tmp=0x7f5afe3b6c30 "",
> compar=
>     0x60ae60 <block_usage_comparer>, width=8, nel=4517,
> base=0x7f5ab2e10168)
>     at sgen-qsort.c:31
> #2  qsort_rec (base=base@entry=0x7f5ab2e10168, nel=nel@entry=4517,
>     width=width@entry=8, compar=compar@entry=0x60ae60
> <block_usage_comparer>,
>     pivot_tmp=pivot_tmp@entry=0x7f5afe3b6c30 "",
>     swap_tmp=swap_tmp@entry=0x7f5afe3b6c20 "") at sgen-qsort.c:52
> #3  0x000000000061ce7b in qsort_rec (base=base@entry=0x7f5ab2e10168,
>     nel=nel@entry=4518, width=width@entry=8, compar=compar@entry=
>     0x60ae60 <block_usage_comparer>,
>     pivot_tmp=pivot_tmp@entry=0x7f5afe3b6c30 "",
>     swap_tmp=swap_tmp@entry=0x7f5afe3b6c20 "") at sgen-qsort.c:53
> #4  0x000000000061ce7b in qsort_rec (base=base@entry=0x7f5ab2e10168,
>     nel=nel@entry=4519, width=width@entry=8, compar=compar@entry=
>     0x60ae60 <block_usage_comparer>,
>     pivot_tmp=pivot_tmp@entry=0x7f5afe3b6c30 "",
>     swap_tmp=swap_tmp@entry=0x7f5afe3b6c20 "") at sgen-qsort.c:53
> ...
>
> (gdb) bt -20
> #18349 0x000000000061ce7b in qsort_rec (base=0x7f5ab2dbc030,
>     base@entry=0x7f5ab2dbc000, nel=184426, nel@entry=184432,
>     width=width@entry=8, compar=compar@entry=0x60ae60
> <block_usage_comparer>,
>     pivot_tmp=pivot_tmp@entry=0x7f5afe3b6c30 "",
>     swap_tmp=swap_tmp@entry=0x7f5afe3b6c20 "") at sgen-qsort.c:53
> #18350 0x000000000061ce7b in qsort_rec (base=base@entry=0x7f5ab2dbc000,
>     nel=nel@entry=184433, width=width@entry=8, compar=compar@entry=
>     0x60ae60 <block_usage_comparer>,
>     pivot_tmp=pivot_tmp@entry=0x7f5afe3b6c30 "",
>     swap_tmp=swap_tmp@entry=0x7f5afe3b6c20 "") at sgen-qsort.c:53
> #18351 0x000000000061ce7b in qsort_rec (base=base@entry=0x7f5ab2dbc000,
>     nel=nel@entry=229138, width=width@entry=8, compar=compar@entry=
>     0x60ae60 <block_usage_comparer>,
>     pivot_tmp=pivot_tmp@entry=0x7f5afe3b6c30 "",
>     swap_tmp=swap_tmp@entry=0x7f5afe3b6c20 "") at sgen-qsort.c:53
> #18352 0x000000000061cedd in sgen_qsort (base=base@entry=0x7f5ab2dbc000,
>     nel=nel@entry=229138, width=width@entry=8, compar=compar@entry=
>     0x60ae60 <block_usage_comparer>) at sgen-qsort.c:69
> #18353 0x000000000060b7df in sgen_evacuation_freelist_blocks (
>     block_list=0x7f5b8576b300, size_index=10) at sgen-marksweep.c:1860
> #18354 0x000000000060d319 in major_start_major_collection ()
>     at sgen-marksweep.c:1898
> #18355 0x0000000000604f59 in major_start_collection (
>     reason=reason@entry=0x702fb1 "LOS overflow",
>     concurrent=concurrent@entry=0,
>     old_next_pin_slot=old_next_pin_slot@entry=0x7f5afe3b6d28) at
> sgen-gc.c:1923
> #18356 0x0000000000607678 in major_do_collection (forced=0, is_overflow=0,
>     reason=0x702fb1 "LOS overflow") at sgen-gc.c:2082
> #18357 major_do_collection (reason=0x702fb1 "LOS overflow", is_overflow=0,
>     forced=0) at sgen-gc.c:2065
> #18358 0x0000000000607d44 in sgen_perform_collection (requested_size=43344,
>     generation_to_collect=1, reason=0x702fb1 "LOS overflow",
> wait_to_finish=0,
>     stw=1) at sgen-gc.c:2279
> #18359 0x000000000060823c in sgen_ensure_free_space (size=<optimized out>,
>     generation=<optimized out>) at sgen-gc.c:2232
> #18360 0x000000000060a259 in sgen_los_alloc_large_inner (
>     vtable=vtable@entry=0xe004a8, size=size@entry=43344) at sgen-los.c:379
> #18361 0x00000000005fb580 in sgen_alloc_obj_nolock (
>     vtable=vtable@entry=0xe004a8, size=size@entry=43344) at
> sgen-alloc.c:175
> #18362 0x00000000005e8da1 in mono_gc_alloc_string (vtable=vtable("string"),
>     size=size@entry=43344, len=len@entry=21661) at sgen-mono.c:1833
> #18363 0x00000000005c5025 in mono_string_new_size_checked (domain=0xdd2fe0,
>     len=len@entry=21661, error=error@entry=0x7f5afe3b6eb0) at
> object.c:6074
> #18364 0x0000000000597899 in ves_icall_System_String_InternalAllocateStr (
>     length=21661) at string-icalls.c:41
> #18365 0x00000000405fbed2 in ?? ()
> #18366 0x00007f5b016fdd78 in ?? ()
> #18367 0x00007f5aaa5c6930 in ?? ()
> #18368 0x0000000000000000 in ?? ()
>
>
> Stack overflow after 18,368 stack frames, caused by the recursive quicksort
> implementation in sgen-qsort.c. The application creates a large number of
> short-lived objects, and memory is badly fragmented (229,138 entries in the
> freelist...). The stack size has already been increased to 16M, and the GC
> nursery size is set to 2G to cope with the large number of temporary
> objects, which keeps the number of mmap'ed fragments lower (~60,000
> instead of ~120,000).
>
> Does Mono honor the system stack size limit (and thus allow larger stacks
> for larger values of ulimit -s)?
>
> Regards,
> Burkhard
> _______________________________________________
> Mono-devel-list mailing list
> Mono-devel-list@lists.dot.net
> http://lists.dot.net/mailman/listinfo/mono-devel-list
>