Hi,

We have encountered a similar problem with an application for large-scale data processing. After allocating about 30 GB of RAM, the application crashed with a SIGSEGV:


(gdb) bt
#0  alloc_sb (desc=0x7f3a1696a8c0) at lock-free-alloc.c:146
#1  alloc_from_new_sb (heap=0x987e70 <allocators+48>) at lock-free-alloc.c:411
#2  mono_lock_free_alloc (heap=0x987e70 <allocators+48>)
    at lock-free-alloc.c:440
#3  0x0000000000609b17 in sgen_alloc_internal_dynamic (size=size@entry=32,
    type=type@entry=11, assert_on_failure=assert_on_failure@entry=1)
    at sgen-internal.c:171
#4  0x000000000061ab03 in sgen_pin_stats_register_address (
    addr=addr@entry=0x7f3a1eb6d048 "\350/\016\003", pin_type=pin_type@entry=2)
    at sgen-pinning-stats.c:93
#5  0x00000000006046e2 in sgen_conservatively_pin_objects_from (
    start=0x30ead10, end=0x30eae10,
    start_nursery=start_nursery@entry=0x7f3a1e800000,
    end_nursery=end_nursery@entry=0x7f3a1ec00000, pin_type=pin_type@entry=2)
    at sgen-gc.c:845
#6  0x0000000000604779 in pin_from_roots (start_nursery=0x7f3a1e800000,
    end_nursery=end_nursery@entry=0x7f3a1ec00000, ctx=...) at sgen-gc.c:868
#7  0x0000000000607874 in collect_nursery (
    reason=reason@entry=0x702f7d "Nursery full",
    is_overflow=is_overflow@entry=0, finish_up_concurrent_mark=0,
    unpin_queue=0x0) at sgen-gc.c:1563
#8  0x0000000000607f5c in major_start_concurrent_collection (
    reason=<optimized out>) at sgen-gc.c:2121
#9  sgen_perform_collection (requested_size=4096, generation_to_collect=0,
    reason=0x702f7d "Nursery full", wait_to_finish=0, stw=1) at sgen-gc.c:2277
#10 0x000000000060823c in sgen_ensure_free_space (size=<optimized out>,
    generation=<optimized out>) at sgen-gc.c:2232
#11 0x00000000005fb80d in sgen_alloc_obj_nolock (
    vtable=vtable@entry=0x18068a8, size=80, size@entry=76) at sgen-alloc.c:262
#12 0x00000000005e8da1 in mono_gc_alloc_string (vtable=vtable("string"),
    size=size@entry=76, len=len@entry=27) at sgen-mono.c:1833
#13 0x00000000005c5025 in mono_string_new_size_checked (domain=0x17d93e0,
    len=len@entry=27, error=error@entry=0x7f3a15048cc0) at object.c:6074
#14 0x0000000000597899 in ves_icall_System_String_InternalAllocateStr (
    length=27) at string-icalls.c:41
#15 0x00000000419fded2 in ?? ()
#16 0x00007f3a1eb14f10 in ?? ()
#17 0x00007f38044a3150 in ?? ()
#18 0x0000000000000000 in ?? ()


Since the crash happens in alloc_sb(), which as far as we can tell maps a new superblock for the lock-free allocator, we had a closer look at the process's memory mappings and came across this:


$ wc -l /proc/44969/maps
65532 /proc/44969/maps
$ sysctl vm.max_map_count
vm.max_map_count = 65530

(44969 being the PID of the mono process)
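
To get a feeling for what is actually being mapped, a rough breakdown by backing path (an empty path means an anonymous mapping) can be produced along these lines; just a sketch, the PID is of course ours:

$ awk '{print $6}' /proc/44969/maps | sort | uniq -c | sort -rn | head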


Apparently the process is not able to mmap another chunk of memory. I'll test this by raising the sysctl value and doing another run.
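
For the test, something along these lines should do (262144 is just an arbitrary higher value, not a recommendation, and the file name under /etc/sysctl.d is arbitrary as well):

$ sudo sysctl -w vm.max_map_count=262144        # takes effect immediately, lost on reboot
$ echo 'vm.max_map_count = 262144' | sudo tee /etc/sysctl.d/99-max-map-count.conf
$ sudo sysctl --system                          # reload all sysctl config files, including the new one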


Does the 4.6 release introduce any changes to the memory layer, e.g. a finer granularity for memory management (resulting in more mmap'ed chunks)?
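
Independent of that, we'll probably try to quantify it on our side by sampling the mapping count over the lifetime of the process under the old and the new release, roughly like this (one sample per minute until the process exits):

$ while kill -0 44969 2>/dev/null; do wc -l < /proc/44969/maps; sleep 60; done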


Regards,

Burkhard
