Re: [v8-users] Cryptic out-of-memory error
V8 suffered from a virtual address space leak which was fixed and backmerged to 57 (https://bugs.chromium.org/p/v8/issues/detail?id=5945). Awesome that it works for you now.

On Thu, May 11, 2017 at 7:04 PM, Andre Cunha wrote:
> I have repeated the tests in V8 5.8.283.38, and indeed the problem is gone. The amount of virtual memory remains stable over time. [...]
Re: [v8-users] Cryptic out-of-memory error
I have repeated the tests in V8 5.8.283.38, and indeed the problem is gone. The amount of virtual memory remains stable over time.

With regard to the cause of the problem, I managed to create a similar situation (increase in virtual memory consumption without increase in actual memory usage) using a loop like this:

    while (true) {
        usleep(100);
        sbrk(4096 * 40);
    }

I would guess that, in version 5.6, the program break of the process was being increased when an Isolate was allocated, some allocated pages were not being used, but the program break wasn't being decreased when Isolate::Dispose() was called. The memory the Isolate used to occupy was nonetheless marked free, and thus reused in subsequent allocations, but the allocation process would still increase the program break anyway. Since these extra pages were never referenced, no actual memory was allocated to the process, but the program break reached its limit at some point. That could explain the situation, but it's just a wild guess, and the problem is solved in 5.8 anyway.

Thank you for the support.
Andre

On Thursday, May 11, 2017 at 10:45:30 AM UTC-3, Jakob Kummerow wrote:
> ...which would be branch-heads/5.8 (currently 5.8.283.38) [...]
Re: [v8-users] Cryptic out-of-memory error
On Thu, May 11, 2017 at 3:38 PM, Jochen Eisinger wrote:
> Thank you for the detailed bug report.
>
> I tried reproducing this on the latest version of V8, but couldn't observe the behavior you described.
>
> Have you considered updating to at least the latest stable version of V8?

...which would be branch-heads/5.8 (currently 5.8.283.38)
Re: [v8-users] Cryptic out-of-memory error
Thank you for the detailed bug report.

I tried reproducing this on the latest version of V8, but couldn't observe the behavior you described.

Have you considered updating to at least the latest stable version of V8?

On Wed, May 10, 2017 at 7:50 PM, Andre Cunha wrote:
> I've managed to reproduce the problem using just V8's hello_world example (source code attached). I just added a loop around the creation and destruction of the Isolate (this is what happens in each cycle of my stress test). When I run the process and monitor it in "top", the RES column stays constant at around 26 MB, but the VIRT column grows indefinitely; after about 7 minutes, the VIRT column reaches around 33 GB, and the process crashes (the value of "CommitLimit" on my machine, taken from /proc/meminfo, is 35,511,816 kB).
>
> Following Michael's suggestion, I changed file src/heap/spaces.cc so that it prints a stack trace when it's about to return NULL. I'm also sending the stack trace attached. I use V8 5.6.326.42 on Fedora 25, x86_64.
>
> Just to explain why I'm doing this test: in the library I'm working on, the user can create a certain kind of thread and send requests to it. Each thread needs to run JS code (received from the user), so it creates its own Isolate when it needs to, and destroys it when the Isolate is no longer necessary. One of our stress tests involves the constant creation and destruction of such threads, as well as constantly sending requests to the same thread. It was in this context that I found this problem.
>
> On Monday, May 8, 2017 at 12:50:37 PM UTC-3, Andre Cunha wrote:
>> @Michael Lippautz, I'll try adding a breakpoint if AllocateChunk returns NULL; hopefully, I'll get more information about the problem.
>>
>> @Jakob Kummerow, yes, I'm calling Isolate::Dispose() on every isolate after using it. I'll also observe the VIRT column and see if it shows any abnormality.
>>
>> Thank you!
>>
>> On Monday, May 8, 2017 at 11:07:44 AM UTC-3, Jakob Kummerow wrote:
>>> My guess would be an address space leak (should show up in the "VIRT" column of "top" on Linux). Are you calling "isolate->Dispose()" on any isolate you're done with?
>>>
>>> On Mon, May 8, 2017 at 4:01 PM, Michael Lippautz wrote:
>>>> V8 usually fails there if it cannot allocate a 512KiB page from the operating system.
>>>>
>>>> You could try hooking in AllocateChunk [1] and see why it is returning NULL, and trace back through the underlying calls.
>>>>
>>>> Best, Michael
>>>>
>>>> [1]: https://cs.chromium.org/chromium/src/v8/src/heap/spaces.cc?q=AllocateChunk=package:chromium=739
>>>>
>>>> On Mon, May 8, 2017 at 3:27 PM, Andre Cunha wrote:
>>>>> Hello,
>>>>>
>>>>> I have embedded V8 into a project for the company I work for, and during some stress tests I've encountered a weird out-of-memory error. After considerable investigation, I still have no idea what might be going on, so I'm reaching out to you in hope of some insight.
>>>>>
>>>>> So here is a summary of the scenario: in each test iteration, I create an Isolate, run some short JS code fragments, and then destroy the isolate. After the execution of each code fragment, I perform some variable manipulations from my C++ code using V8's API, prior to running the next fragment. I repeat thousands of such iterations over the same input (it's valid), and I expect no memory leaks and no crashes. However, after about 3 hours, V8 crashes with an out-of-memory error for no apparent reason.
>>>>>
>>>>> I have run the code through valgrind and with address sanitizing, and no memory leaks were detected. Additionally, I monitor memory consumption throughout the test; the program's memory usage is stable, without any peak, and when V8 crashes the system has a lot of available memory (more than 5 GiB). I have used V8's API to get heap usage statistics after each successful iteration; the values are always the same, and are shown below (they are included in an attached file, typical_memory.txt):
>>>>>
>>>>>     ScriptEngine::Run: finished running at 2017-05-05T13:20:34
>>>>>     used_heap_size  : 46.9189 Mib
>>>>>     total_heap_size : 66.1562 Mib
>>>>>     Space 0
>>>>>       name           : new_space
>>>>>       size           : 8 Mib
>>>>>       used_size      : 2.47314 Mib
>>>>>       available_size : 5.39404 Mib
>>>>>     Space 1
>>>>>       name           : old_space
>>>>>       size           : 39.5625 Mib
>>>>>       used_size      : 31.6393 Mib
>>>>>       available_size : 5.51526 Mib
>>>>>     Space 2
>>>>>       name           : code_space
>>>>>       size           : 10.4375 Mib
>>>>>       used_size      : 6.16919 Mib
>>>>>       available_size : 0 B
>>>>>     Space 3
>>>>>       name           : map_space
Re: [v8-users] V8 assertion timezone.js - difference between UTC and Etc/UTC
Upgrading to ICU 59 is something that's in progress upstream. Those particular issues are addressed by recent or out-for-review patches:

- https://chromium-review.googlesource.com/c/499609/2/src/intl.cc
- https://chromium-review.googlesource.com/c/496406/

Do things work for you if you patch those in locally?

Dan

On Wed, May 10, 2017 at 8:36 PM, wrote:
> I'm packaging V8 5.9.116.17 on Arch Linux using the system installation of ICU 59.1.
>
> Everything seems compatible apart from the fact that the two functions u_strToUpper and u_strToLower are now in ustring.h, so I added the header to i18n.cc:
>
>     --- i18n.cc 2017-05-10 11:53:57.215319733 +0200
>     +++ i18n_patched.cc 2017-05-10 11:53:50.241855309 +0200
>     @@ -29,6 +29,7 @@
>      #include "unicode/smpdtfmt.h"
>      #include "unicode/timezone.h"
>      #include "unicode/uchar.h"
>     +#include "unicode/ustring.h"
>      #include "unicode/ucol.h"
>      #include "unicode/ucurr.h"
>      #include "unicode/unum.h"
>
> The build is fine if warnings are not considered errors.
>
> Then I run the checks like so:
>
>     tools/run-tests.py --no-presubmit --outdir=out.gn --buildbot --arch=x64 --mode=Release
>
> One assert in timezone.js (https://chromium.googlesource.com/v8/v8.git/+/5.9-lkgr/test/intl/date-format/timezone.js) fails, saying that Etc/UTC is found instead of UTC. Shouldn't UTC be a shortcut for Etc/UTC? Is the assert wrong, or do I have to configure ICU 59.1 for a specific behavior?
>
> Thank you; the assertion error is below.
>
>     === intl/date-format/timezone ===
>     /home/marcs/DevLab/aur/v8/src/v8/test/intl/assert.js:105: Error: Failure: expected <UTC>, found <Etc/UTC>.
>       throw new Error(message);
>       ^
>     Error: Failure: expected <UTC>, found <Etc/UTC>.
>         at fail (/home/marcs/DevLab/aur/v8/src/v8/test/intl/assert.js:105:9)
>         at assertEquals (/home/marcs/DevLab/aur/v8/src/v8/test/intl/assert.js:114:5)
>         at /home/marcs/DevLab/aur/v8/src/v8/test/intl/date-format/timezone.js:38:1
>     Command: /home/marcs/DevLab/aur/v8/src/v8/out.gn/Release/d8 --test --random-seed=937151913 --no-turbo --allow-natives-syntax --nohard-abort --nodead-code-elimination --nofold-constants /home/marcs/DevLab/aur/v8/src/v8/test/intl/assert.js /home/marcs/DevLab/aur/v8/src/v8/test/intl/utils.js /home/marcs/DevLab/aur/v8/src/v8/test/intl/regexp-prepare.js /home/marcs/DevLab/aur/v8/src/v8/test/intl/date-format/timezone.js /home/marcs/DevLab/aur/v8/src/v8/test/intl/regexp-assert.js
>
>     === intl/date-format/timezone ===
>     /home/marcs/DevLab/aur/v8/src/v8/test/intl/assert.js:105: Error: Failure: expected <UTC>, found <Etc/UTC>.
>       throw new Error(message);
>       ^
>     Error: Failure: expected <UTC>, found <Etc/UTC>.
>         at fail (/home/marcs/DevLab/aur/v8/src/v8/test/intl/assert.js:105:9)
>         at assertEquals (/home/marcs/DevLab/aur/v8/src/v8/test/intl/assert.js:114:5)
>         at /home/marcs/DevLab/aur/v8/src/v8/test/intl/date-format/timezone.js:38:1
>     Command: /home/marcs/DevLab/aur/v8/src/v8/out.gn/Release/d8 --test --random-seed=937151913 --allow-natives-syntax --nohard-abort --nodead-code-elimination --nofold-constants /home/marcs/DevLab/aur/v8/src/v8/test/intl/assert.js /home/marcs/DevLab/aur/v8/src/v8/test/intl/utils.js /home/marcs/DevLab/aur/v8/src/v8/test/intl/regexp-prepare.js /home/marcs/DevLab/aur/v8/src/v8/test/intl/date-format/timezone.js /home/marcs/DevLab/aur/v8/src/v8/test/intl/regexp-assert.js

--
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users
---
You received this message because you are subscribed to the Google Groups "v8-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to v8-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
[v8-users] How to pass the second parameter of ObjectTemplate::New in Google V8?
I know how to create an ObjectTemplate, and that we can do several things with it. But my question is not about those well-known things; I want to know how to pass the second parameter. As the official guide says:

> Each function template has an associated object template. This is used to configure objects created with this function as their constructor.

And the second parameter of ObjectTemplate::New is a constructor, typed as a FunctionTemplate:

    static Local<ObjectTemplate> New(
        Isolate* isolate,
        Local<FunctionTemplate> constructor = Local<FunctionTemplate>());

That means something like this:

    void Constructor(const FunctionCallbackInfo<Value>& args) {
      // ...
    }

    Local<FunctionTemplate> _constructor = FunctionTemplate::New(isolate, Constructor);
    Local<ObjectTemplate> tpl = ObjectTemplate::New(isolate, _constructor);

Can anyone give me a demo of how to implement the Constructor function? I tried this, but it failed:

    void Constructor(const FunctionCallbackInfo<Value>& args) {
      Isolate* isolate = args.GetIsolate();
      args.This()->Set(String::NewFromUtf8(isolate, "value"),
                       Number::New(isolate, 233));
      args.GetReturnValue().Set(args.This());
    }

By the way, I know the use case of accessors and so on; I just want to know how to use the second parameter.