Thanks for the clarification, Josh. You're right that the core issue is
probably a lack of memory on the host itself, but if the 1GB is not locked
(if I understood you correctly, the zone will just grab more as it needs it
while staying below this cap), do you have an idea why my initial scenario
failed?

My initial expectation was that creating 4 VMs with 2.5GB each would leave
6GB for the host + ZFS, but in this scenario the last VM to boot ends up in
a zombie state. I had the exact same behavior on the two identical hosts I
provisioned at the same time. Even taking the 1GB extension into account, I
would expect 2GB to still be left for the host.
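To make the expectation above concrete, here is a quick back-of-the-envelope
sketch. The 16GB total is an assumption inferred from this thread (4 x 2.5GB
VMs plus the 6GB mentioned for host + ZFS); the ram + 1024 default comes from
Josh's explanation below.

```python
# Back-of-the-envelope memory accounting for the scenario described above.
# Assumption: the host has 16 GB total (4 x 2.5 GB VMs + 6 GB for host/ZFS).
HOST_RAM_MB = 16 * 1024
VM_RAM_MB = 2560          # the "ram" property: locked for each KVM guest
VM_COUNT = 4
KVM_OVERHEAD_MB = 1024    # default max_physical_memory = ram + 1024 for KVM

locked = VM_COUNT * VM_RAM_MB                      # actually locked by guests
capped = VM_COUNT * (VM_RAM_MB + KVM_OVERHEAD_MB)  # sum of the zone caps

print(f"locked by guests:       {locked} MB")
print(f"left after locking:     {HOST_RAM_MB - locked} MB")
print(f"sum of zone caps:       {capped} MB")
print(f"left if all caps hit:   {HOST_RAM_MB - capped} MB")
```

So even in the worst case where every qemu process consumes its full cap,
2048 MB should remain, which is why the zombie VM is surprising.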

That's not a major issue, but I am just trying to understand why it works
that way. I will try to reproduce it and have a look at the logfile pointed
out above.

On 8 July 2015 at 19:17, Josh Wilsdon <[email protected]> wrote:

>
>> Thanks for all your advice.
>>
>> It's too bad the memory allocated for the zone hosting the qemu process
>> cannot be adjusted. If I create, for example, a 510MB VM, I don't see why
>> the zone hosting the qemu process should get 1GB. I understand it needs
>> some memory of its own to run, but in our case 1GB just seems... excessive
>> :(
>>
>
> I was away and did not see this thread until now.
>
> It should be possible to adjust this value.
>
> As it says in the man page, the default value for max_physical_memory for
> KVM is ram + 1024. That's where the additional GB is coming from, so
> changing this value when provisioning or before booting your zone should
> change this "overhead". If you want to try using a lower value, feel free,
> but last time I looked at this, anything less than ram + 256 meant your
> zone might not boot. Also, if your zone doesn't boot, you'll want to raise
> this back to ram + 1024 before reporting it as a new issue.
>
> All that said, if you're provisioning using `vmadm create` and on a recent
> SmartOS the overhead value here is very unlikely to be your problem. So I'd
> recommend not changing this value and instead doing one or more of:
> lowering the "ram" parameter for these VMs, adding more physical DRAM, or
> creating fewer of these VMs on a single host.
>
> This 1024 that we're adding to max_physical_memory should not fail a new
> provision unless there really isn't memory available. That memory is not
> *reserved* as far as the system is concerned (unlike the memory specified
> by "ram" which is locked), it's just a cap on the amount that the qemu
> process itself can consume. So setting this lower will only make it more
> likely that the qemu process runs out of memory and dies.
>
> Thanks,
> Josh
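For anyone who wants to try Josh's suggestion of setting the cap explicitly
at provision time, a minimal vmadm create payload might look like the
following. This is a sketch, not a complete payload (a real one also needs an
image, disks, nics, etc.); the property names are from vmadm(1M), but the
specific values are assumptions, with the cap chosen between Josh's ram + 256
floor and the ram + 1024 default:

```json
{
  "brand": "kvm",
  "alias": "kvm-test",
  "ram": 2560,
  "max_physical_memory": 3072
}
```

The same property should also be adjustable on an existing, stopped VM with
something like `vmadm update <uuid> max_physical_memory=3072` before booting
it.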



-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now
RSS Feed: https://www.listbox.com/member/archive/rss/184463/25769125-55cfbc00
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=25769125&id_secret=25769125-7688e9fb
Powered by Listbox: http://www.listbox.com
