Did you alter the amount of memory assigned to the zone at some point? If so, how did you do that?
I have seen the same behavior after changing the amount of memory (virtual RAM) assigned to an LX zone. You need to do more than just assign more "memory": https://www.mail-archive.com/[email protected]/msg02992.html

From: Jerry Jelinek [mailto:[email protected]]
Sent: Thursday, 22 June 2017 14:54
To: [email protected]
Subject: Re: [smartos-discuss] Problems with low memory (lx zones)

There seem to be several things here that you've mentioned.

1) In your zone you are trying to use a lot more physical memory than the limit you have set for the zone. The overall thrashing behavior you have described sounds like what would be expected in this case.

2) The process eventually terminates with a SIGBUS. I don't know if this is an issue with your application code or with our platform. You could try strace-ing the app and I'd be happy to look at the trace. If there is a core dump which has useful information (i.e. it has symbols and stack frames) then I could also take a look at that.

3) The box eventually locks up. That is clearly our issue and is something we would want to investigate. Can you force a system dump and provide that to us? If you can't NMI your box when it is in this state, then you might be able to force a dump using DTrace.

Thanks,
Jerry

On Wed, Jun 21, 2017 at 9:57 PM, David Preece <[email protected]> wrote:

Hi,

For some reason I thought I had this nailed but ... obviously not. I have a 16GB physical machine with 64GB of swap on "spinning" hard drives. I built an LX zone with a physical limit of 4GB and 8GB of swap. If I log into my zone and allocate "too much" memory ( python3 -c "bytearray(8*1024*1024*1024)" ), the swap starts thrashing and vfsstat reports long write times for the zone (on the order of 2000us). After a couple of minutes the process dies with signal 7 (SIGBUS). However, if another zone does the same thing, the global zone seems to bear the brunt of it. Write latencies go up to, in some cases, an entire second ...
vmstat reports all the threads waiting and the free-page scanner going mad, and eventually the box locks up solid. This happens regardless of whether or not rcap is running. I'm pretty sure this is not the ideal result - can anyone shed light on what's going on and how I might prevent it?

Cheers,
Dave

smartos-discuss Archives: https://www.listbox.com/member/archive/184463/=now
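On the first reply's point that resizing a zone takes more than just assigning more "memory": SmartOS zones have several related memory properties that need to stay consistent with each other. A minimal sketch of an update payload for `vmadm update <uuid> < payload.json`, assuming the standard vmadm property names; the specific values (in MiB) are illustrative only:

```json
{
  "max_physical_memory": 8192,
  "max_locked_memory": 8192,
  "max_swap": 16384
}
```

The usual rules of thumb are that `max_locked_memory` cannot exceed `max_physical_memory` and `max_swap` should be at least `max_physical_memory`; check the vmadm man page for the exact constraints on your platform version.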
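Jerry's two diagnostic suggestions (strace the app; force a dump via DTrace if NMI isn't possible) can be sketched as follows. The thread doesn't spell out the exact commands, so treat these as assumptions: the DTrace line uses the documented destructive `panic()` action and will deliberately crash the whole machine to produce a dump, so it must never be run on a box you aren't trying to dump:

```
# Inside the zone: trace syscalls of the failing app, per Jerry's request.
strace -f -o app.trace python3 -c "bytearray(8*1024*1024*1024)"

# From the global zone, if the box wedges and NMI isn't available:
# DESTRUCTIVE - panics the machine so a crash dump is written.
dtrace -w -n 'BEGIN { panic(); }'
```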
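For reference, the failing one-liner in the report simply commits a single 8 GiB zeroed buffer, which exceeds the zone's 4 GiB physical cap and forces paging. A scaled-down, harmless sketch of the same allocation pattern (sizes here are illustrative, not the ones from the report):

```python
# Scaled-down version of the allocation from the report:
#   python3 -c "bytearray(8*1024*1024*1024)"
# bytearray(n) creates n zeroed bytes up front, so when n exceeds the
# zone's physical memory cap the excess has to be satisfied from swap.

def allocate_mib(mib):
    """Allocate `mib` MiB and touch one byte per 4 KiB page so the
    pages are actually resident, not merely reserved."""
    buf = bytearray(mib * 1024 * 1024)
    for off in range(0, len(buf), 4096):
        buf[off] = 1
    return buf

buf = allocate_mib(64)            # 64 MiB: safe on any modern machine
print(len(buf) // (1024 * 1024))  # prints 64
```

Scaling `allocate_mib` past the zone's cap reproduces the thrashing described above, so only do that inside a zone you are prepared to lose.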
