Thank you for the detailed bug report.

I tried reproducing this on the latest version of V8, but couldn't observe
the behavior you described.

Could you try updating to at least the latest stable version of V8 and check
whether the problem still occurs?

On Wed, May 10, 2017 at 7:50 PM Andre Cunha <andre.lv.cu...@gmail.com>
wrote:

> I've managed to reproduce the problem using just V8's hello_world example
> (source code attached). I simply added a loop around the creation and
> destruction of the Isolate, which is what happens in each cycle of my
> stress test. When I run the process and monitor it in "top", the RES
> column stays constant at around 26 MB, but the VIRT column grows
> indefinitely; after about 7 minutes it reaches around 33 GB and the
> process crashes (the value of "CommitLimit" on my machine, obtained from
> /proc/meminfo, is 35,511,816 kB).
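>
> The loop is essentially the stock hello_world sample with the Isolate
> lifetime moved inside it. A simplified sketch (the attached file has the
> full code; error handling is omitted here):
>
> #include "include/libplatform/libplatform.h"
> #include "include/v8.h"
>
> int main(int argc, char* argv[]) {
>   v8::V8::InitializeICUDefaultLocation(argv[0]);
>   v8::V8::InitializeExternalStartupData(argv[0]);
>   v8::Platform* platform = v8::platform::CreateDefaultPlatform();
>   v8::V8::InitializePlatform(platform);
>   v8::V8::Initialize();
>
>   for (;;) {  // create and destroy one Isolate per iteration
>     v8::Isolate::CreateParams create_params;
>     create_params.array_buffer_allocator =
>         v8::ArrayBuffer::Allocator::NewDefaultAllocator();
>     v8::Isolate* isolate = v8::Isolate::New(create_params);
>     {
>       v8::Isolate::Scope isolate_scope(isolate);
>       v8::HandleScope handle_scope(isolate);
>       v8::Local<v8::Context> context = v8::Context::New(isolate);
>       v8::Context::Scope context_scope(context);
>       v8::Local<v8::String> source =
>           v8::String::NewFromUtf8(isolate, "'Hello' + ', World!'",
>                                   v8::NewStringType::kNormal)
>               .ToLocalChecked();
>       v8::Local<v8::Script> script =
>           v8::Script::Compile(context, source).ToLocalChecked();
>       script->Run(context).ToLocalChecked();
>     }
>     isolate->Dispose();
>     delete create_params.array_buffer_allocator;
>   }
>   // Never reached in this test:
>   // v8::V8::Dispose(); v8::V8::ShutdownPlatform(); delete platform;
> }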
>
> Following Michael's suggestion, I modified src/heap/spaces.cc so that it
> prints a stack trace when AllocateChunk is about to return NULL; the
> resulting stack trace is also attached. I'm using V8 5.6.326.42 on
> Fedora 25, x86_64.
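>
> The instrumentation itself is trivial: I just dump a native backtrace
> right before the allocation gives up, along the lines of the sketch
> below (this uses glibc's backtrace() for simplicity, and the exact spot
> in spaces.cc is approximate):
>
> // Added near the point in src/heap/spaces.cc where AllocateChunk is
> // about to return NULL; call DumpNativeBacktrace() right before that.
> #include <execinfo.h>
> #include <cstdio>
>
> static void DumpNativeBacktrace() {
>   void* frames[64];
>   int count = backtrace(frames, 64);
>   std::fprintf(stderr, "AllocateChunk is about to return NULL:\n");
>   backtrace_symbols_fd(frames, count, 2 /* stderr */);
> }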
>
> Just to explain why I'm doing this test: in the library I'm working on,
> the user can create a certain kind of thread and send requests to it.
> Each thread needs to run JS code (received from the user), so it creates
> its own Isolate when needed and destroys it once the Isolate is no
> longer necessary. One of our stress tests involves the constant creation
> and destruction of such threads, as well as constantly sending requests
> to the same thread. It was in this context that I found this problem.
>
> On Monday, May 8, 2017 at 12:50:37 PM UTC-3, Andre Cunha wrote:
>>
>> @Michael Lippautz, I'll try adding a breakpoint that triggers when
>> AllocateChunk returns NULL; hopefully that will give me more information
>> about the problem.
>>
>> @Jakob Kummerow, yes, I'm calling Isolate::Dispose() on every isolate
>> after using it. I'll also keep an eye on the VIRT column and see if it
>> shows anything abnormal.
>>
>> Thank you!
>>
>> On Monday, May 8, 2017 at 11:07:44 AM UTC-3, Jakob Kummerow wrote:
>>>
>>> My guess would be an address space leak (should show up in the "VIRT"
>>> column of "top" on Linux). Are you calling "isolate->Dispose()" on any
>>> isolate you're done with?
>>>
>>> On Mon, May 8, 2017 at 4:01 PM, Michael Lippautz <mlip...@chromium.org>
>>> wrote:
>>>
>>>> V8 usually fails there if it cannot allocate a 512 KiB page from the
>>>> operating system.
>>>>
>>>> You could try hooking into AllocateChunk [1] to see why it is returning
>>>> NULL, and trace back through the underlying calls.
>>>>
>>>> Best, Michael
>>>>
>>>> [1]:
>>>> https://cs.chromium.org/chromium/src/v8/src/heap/spaces.cc?q=AllocateChunk&sq=package:chromium&l=739
>>>>
>>>> On Mon, May 8, 2017 at 3:27 PM Andre Cunha <andre.l...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I have embedded V8 into a project for the company I work for, and
>>>>> during some stress tests I've encountered a strange out-of-memory
>>>>> error. After considerable investigation I still have no idea what
>>>>> might be going on, so I'm reaching out to you in the hope of getting
>>>>> some insight.
>>>>>
>>>>> Here is a summary of the scenario: in each test iteration, I create an
>>>>> Isolate, run some short JS code fragments, and then destroy the
>>>>> Isolate. After each fragment runs, I manipulate some variables from my
>>>>> C++ code through V8's API before running the next fragment. I repeat
>>>>> thousands of such iterations over the same (valid) input, and I expect
>>>>> no memory leaks and no crashes. However, after about 3 hours, V8
>>>>> crashes with an out-of-memory error for no apparent reason.
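>>>>>
>>>>> To give an idea of the shape of one iteration, here is a heavily
>>>>> simplified sketch (the variable names and fragments are made up; the
>>>>> real code reads values out of the context and writes new ones back
>>>>> between fragments):
>>>>>
>>>>> // One simplified iteration: run a fragment, touch a global from C++,
>>>>> // then run the next fragment.
>>>>> void RunOneIteration(v8::Isolate* isolate) {
>>>>>   v8::Isolate::Scope isolate_scope(isolate);
>>>>>   v8::HandleScope handle_scope(isolate);
>>>>>   v8::Local<v8::Context> context = v8::Context::New(isolate);
>>>>>   v8::Context::Scope context_scope(context);
>>>>>
>>>>>   auto run = [&](const char* code) {
>>>>>     v8::Local<v8::String> src =
>>>>>         v8::String::NewFromUtf8(isolate, code,
>>>>>                                 v8::NewStringType::kNormal)
>>>>>             .ToLocalChecked();
>>>>>     v8::Script::Compile(context, src).ToLocalChecked()
>>>>>         ->Run(context).ToLocalChecked();
>>>>>   };
>>>>>
>>>>>   run("var counter = 1;");
>>>>>
>>>>>   // Manipulate the global "counter" from C++ before the next fragment.
>>>>>   v8::Local<v8::String> name =
>>>>>       v8::String::NewFromUtf8(isolate, "counter",
>>>>>                               v8::NewStringType::kNormal)
>>>>>           .ToLocalChecked();
>>>>>   v8::Local<v8::Value> value =
>>>>>       context->Global()->Get(context, name).ToLocalChecked();
>>>>>   double next = value->NumberValue(context).FromJust() + 1;
>>>>>   context->Global()
>>>>>       ->Set(context, name, v8::Number::New(isolate, next))
>>>>>       .FromJust();
>>>>>
>>>>>   run("counter += 1;");
>>>>> }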
>>>>>
>>>>> I have run the code through Valgrind and with AddressSanitizer, and no
>>>>> memory leaks were detected. Additionally, I monitor memory consumption
>>>>> throughout the test; the program's memory usage is stable, without any
>>>>> spikes, and when V8 crashes the system still has plenty of available
>>>>> memory (more than 5 GiB). I have used V8's API to get heap usage
>>>>> statistics after each successful iteration; the values are always the
>>>>> same, and are shown below (they are also included in the attached file
>>>>> typical_memory.txt):
>>>>>
>>>>> ScriptEngine::Run: finished running at 2017-05-05T13:20:34
>>>>>   used_heap_size       : 46.9189 Mib
>>>>>   total_heap_size      : 66.1562 Mib
>>>>>   Space 0
>>>>>     name               : new_space
>>>>>     size               : 8 Mib
>>>>>     used_size          : 2.47314 Mib
>>>>>     available_size     : 5.39404 Mib
>>>>>   Space 1
>>>>>     name               : old_space
>>>>>     size               : 39.5625 Mib
>>>>>     used_size          : 31.6393 Mib
>>>>>     available_size     : 5.51526 Mib
>>>>>   Space 2
>>>>>     name               : code_space
>>>>>     size               : 10.4375 Mib
>>>>>     used_size          : 6.16919 Mib
>>>>>     available_size     : 0 B
>>>>>   Space 3
>>>>>     name               : map_space
>>>>>     size               : 8.15625 Mib
>>>>>     used_size          : 6.63733 Mib
>>>>>     available_size     : 80 B
>>>>>   Space 4
>>>>>     name               : large_object_space
>>>>>     size               : 0 B
>>>>>     used_size          : 0 B
>>>>>     available_size     : 11.1015 Gib
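>>>>>
>>>>> For reference, these numbers come from v8::Isolate::GetHeapStatistics()
>>>>> and GetHeapSpaceStatistics(), roughly as in the sketch below (the
>>>>> pretty-printing of byte counts into MiB/GiB is omitted):
>>>>>
>>>>> #include <cstdio>
>>>>> #include "include/v8.h"
>>>>>
>>>>> void DumpHeapStats(v8::Isolate* isolate) {
>>>>>   v8::HeapStatistics heap;
>>>>>   isolate->GetHeapStatistics(&heap);
>>>>>   std::printf("  used_heap_size : %zu\n", heap.used_heap_size());
>>>>>   std::printf("  total_heap_size: %zu\n", heap.total_heap_size());
>>>>>
>>>>>   for (size_t i = 0; i < isolate->NumberOfHeapSpaces(); ++i) {
>>>>>     v8::HeapSpaceStatistics space;
>>>>>     isolate->GetHeapSpaceStatistics(&space, i);
>>>>>     std::printf("  Space %zu\n    name           : %s\n", i,
>>>>>                 space.space_name());
>>>>>     std::printf("    size           : %zu\n", space.space_size());
>>>>>     std::printf("    used_size      : %zu\n", space.space_used_size());
>>>>>     std::printf("    available_size : %zu\n",
>>>>>                 space.space_available_size());
>>>>>   }
>>>>> }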
>>>>>
>>>>> When V8 crashes, it prints a heap summary, which is attached (file
>>>>> heap_after_error.txt). I also saved a core dump. Sometimes the process
>>>>> crashes during the creation of an Isolate, sometimes during the
>>>>> creation of a Context, but typically it crashes during snapshot
>>>>> deserialization. However, the top of the stack is always the same; it
>>>>> is reproduced below (and also attached as stacktrace.txt).
>>>>>
>>>>> #7  v8::internal::OS::Abort () at
>>>>> ../../src/base/platform/platform-posix.cc:230
>>>>> #8  0x00007ff15a2f922f in v8::Utils::ReportOOMFailure
>>>>> (location=0x7ff15b20f62e "Committing semi space failed.",
>>>>> is_heap_oom=false) at ../../src/api.cc:381
>>>>> #9  0x00007ff15a2f918e in v8::internal::V8::FatalProcessOutOfMemory
>>>>> (location=0x7ff15b20f62e "Committing semi space failed.",
>>>>> is_heap_oom=false) at ../../src/api.cc:352
>>>>> #10 0x00007ff15aa3fefc in
>>>>> v8::internal::Heap::EnsureFromSpaceIsCommitted (this=0x7ff12c0bdde0) at
>>>>> ../../src/heap/heap.cc:1234
>>>>> #11 0x00007ff15aa3ed34 in v8::internal::Heap::PerformGarbageCollection
>>>>> (this=0x7ff12c0bdde0, collector=v8::internal::MARK_COMPACTOR,
>>>>>     gc_callback_flags=v8::kNoGCCallbackFlags) at
>>>>> ../../src/heap/heap.cc:1308
>>>>> #12 0x00007ff15aa3e2ab in v8::internal::Heap::CollectGarbage
>>>>> (this=0x7ff12c0bdde0, collector=v8::internal::MARK_COMPACTOR,
>>>>>     gc_reason=v8::internal::GarbageCollectionReason::kDeserializer,
>>>>> collector_reason=0x7ff15b20f07a "GC in old space requested",
>>>>>     gc_callback_flags=v8::kNoGCCallbackFlags) at
>>>>> ../../src/heap/heap.cc:1002
>>>>> #13 0x00007ff15a33cdee in v8::internal::Heap::CollectGarbage
>>>>> (this=0x7ff12c0bdde0, space=v8::internal::OLD_SPACE,
>>>>>     gc_reason=v8::internal::GarbageCollectionReason::kDeserializer,
>>>>> callbackFlags=v8::kNoGCCallbackFlags) at ../../src/heap/heap-inl.h:681
>>>>> #14 0x00007ff15aa3d069 in v8::internal::Heap::CollectAllGarbage
>>>>> (this=0x7ff12c0bdde0, flags=2,
>>>>>     gc_reason=v8::internal::GarbageCollectionReason::kDeserializer,
>>>>> gc_callback_flags=v8::kNoGCCallbackFlags) at ../../src/heap/heap.cc:848
>>>>> #15 0x00007ff15aa3fe84 in v8::internal::Heap::ReserveSpace
>>>>> (this=0x7ff12c0bdde0, reservations=0x7ff148fe6078, maps=0x7ff148fe60f8) at
>>>>> ../../src/heap/heap.cc:1215
>>>>>
>>>>> In the heap summary that gets printed, I noticed some apparent
>>>>> discrepancies with the typical data I get from the API (shown above):
>>>>> for example, the summary says the size of the old space is 4067328
>>>>> bytes (= 3.88 MiB), not the typical 39.56 MiB I get from the API.
>>>>>
>>>>> I have dug into V8's garbage collection code, but still couldn't make
>>>>> sense of the error message ("Committing semi space failed"). So I'd
>>>>> like to know under which circumstances this error can happen, and how
>>>>> it can happen only occasionally, given that each test iteration is
>>>>> identical to the others and there are no detectable memory leaks.
>>>>>
>>>>> If you need more information, please tell me, and I'll be glad to
>>>>> provide it.
>>>>>
>>>>> Thank you very much in advance.
>>>>> Andre
>>>>>
