Thanks for the report, but there's nothing to be done here (or to be concerned about). That test allocates a contiguous 2GB chunk of memory on x64 (1GB on ia32). If it can't get that, it'll crash. Try running it in isolation (just copy-paste the command from the line starting with "Command: ...") and it should pass -- assuming your machine has 2GB of free memory at that point.
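If you want to check that up front before re-running the test, here's a minimal sketch (not part of the V8 tree, just an illustration that assumes a Linux /proc/meminfo layout) that estimates how much memory is actually free:

    #!/usr/bin/env python
    # Hypothetical helper: estimate free memory before re-running the test
    # in isolation, since the OOM killer will otherwise step in as seen in
    # the dmesg log below. Assumes /proc/meminfo with MemFree/Buffers/Cached.

    NEEDED_KB = 2 * 1024 * 1024  # ~2GB on x64 (use 1GB for ia32)

    def free_kb():
        fields = {}
        with open('/proc/meminfo') as f:
            for line in f:
                name, value = line.split(':', 1)
                fields[name] = int(value.split()[0])  # values are in kB
        # Free memory plus easily reclaimable page cache approximates
        # what a single large allocation can actually get.
        return (fields.get('MemFree', 0) +
                fields.get('Buffers', 0) +
                fields.get('Cached', 0))

    available = free_kb()
    print('available: %d kB, needed: %d kB' % (available, NEEDED_KB))
    if available < NEEDED_KB:
        print('Not enough free memory; the test will likely be OOM-killed.')

Run it right before the copied "Command: ..." line; if it reports less than roughly 2GB available, the OOM kill is expected rather than a bug.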
The reason it "started failing" on the 3.21 branch is that it has been backmerged to that branch along with the bug fix it verifies. We have since added infrastructure that allows us to run an equivalent test without requiring so much memory, but there's no reason to backmerge such a patch to the stable branch. On Sat, Dec 7, 2013 at 5:21 AM, Anatol Pomozov <[email protected]>wrote: > And it looks like the crash comes from Linux OOM killer. It is what dmesg > says: > > > [32497.116849] d8 invoked oom-killer: gfp_mask=0x280da, order=0, > oom_score_adj=0 > [32497.116853] d8 cpuset=/ mems_allowed=0 > [32497.116856] CPU: 1 PID: 4570 Comm: d8 Tainted: G W 3.12.3-1-ARCH #1 > [32497.116858] Hardware name: To Be Filled By O.E.M. To Be Filled By > O.E.M./H61M/U3S3, BIOS P2.20 07/30/2012 > [32497.116860] 0000000000000000 ffff88007a2bdaa0 ffffffff814ee3db > ffff880118ee56e0 > [32497.116863] ffff88007a2bdb30 ffffffff814ec36e ffff88007a2bdac0 > ffffffff81062d66 > [32497.116865] ffff88007a2bdb08 ffffffff810f3354 0000000000000141 > ffff88011f5f7b38 > [32497.116868] Call Trace: > [32497.116876] [<ffffffff814ee3db>] dump_stack+0x54/0x8d > [32497.116878] [<ffffffff814ec36e>] dump_header+0x7f/0x200 > [32497.116883] [<ffffffff81062d66>] ? put_online_cpus+0x56/0x80 > [32497.116886] [<ffffffff810f3354>] ? rcu_oom_notify+0xe4/0x100 > [32497.116890] [<ffffffff81138dd6>] oom_kill_process+0x206/0x390 > [32497.116892] [<ffffffff81139557>] out_of_memory+0x437/0x480 > [32497.116896] [<ffffffff8113f3e9>] __alloc_pages_nodemask+0xad9/0xaf0 > [32497.116900] [<ffffffff8118065a>] alloc_pages_vma+0x9a/0x190 > [32497.116904] [<ffffffff81161c2b>] handle_mm_fault+0xedb/0x10f0 > [32497.116907] [<ffffffff814f8c59>] __do_page_fault+0x1e9/0x5f0 > [32497.116910] [<ffffffff814f5ae6>] ? retint_kernel+0x26/0x30 > [32497.116913] [<ffffffff814f8d62>] ? __do_page_fault+0x2f2/0x5f0 > [32497.116915] [<ffffffff814f906e>] do_page_fault+0xe/0x10 > [32497.116917] [<ffffffff814f5c88>] page_fault+0x28/0x30 > [32497.137319] [24061] 1000 24061 84273 10894 70 0 0 python2 > [32497.137321] [24065] 1000 24065 26933 7127 55 0 0 python2 > [32497.137323] [24066] 1000 24066 26678 7062 54 0 0 python2 > [32497.137325] [ 4570] 1000 4570 682632 481958 981 0 0 d8 > [32497.137327] [ 4574] 1000 4574 682632 342254 708 0 0 d8 > [32497.137329] Out of memory: Kill process 4570 (d8) score 514 or > sacrifice child > [32497.137363] Killed process 4570 (d8) total-vm:2730528kB, > anon-rss:1927664kB, file-rss:168kB > [33336.362402] d8 invoked oom-killer: gfp_mask=0x280da, order=0, > oom_score_adj=0 > [33336.362407] d8 cpuset=/ mems_allowed=0 > > > So it looks like a memory leak in test or in V8 code itself. > > -- > -- > v8-users mailing list > [email protected] > http://groups.google.com/group/v8-users > --- > You received this message because you are subscribed to the Google Groups > "v8-users" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to [email protected]. > For more options, visit https://groups.google.com/groups/opt_out. > -- -- v8-users mailing list [email protected] http://groups.google.com/group/v8-users --- You received this message because you are subscribed to the Google Groups "v8-users" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. For more options, visit https://groups.google.com/groups/opt_out.
