[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-14 Thread Alex McWhirter
I have gone in and changed the libvirt configuration files on the cluster
nodes, which has resolved the issue for the time being.
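
(For reference, the temporary change discussed here boils down to roughly the
following on each node - a sketch only, assuming the standard path
/etc/libvirt/qemu.conf, and with the caveat Milan raises below that it also
affects permission handling for other devices:

    # /etc/libvirt/qemu.conf - stop libvirt from changing image ownership
    dynamic_ownership = 0
    # restart libvirtd afterwards
    systemctl restart libvirtd
)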

I can reverse one of them and post the logs to help with the issue,
hopefully tomorrow. 

On 2019-06-14 17:56, Nir Soffer wrote:

> On Fri, Jun 14, 2019 at 7:05 PM Milan Zamazal  wrote: 
> 
>> Alex McWhirter  writes:
>> 
>>> In this case, i should be able to edit /etc/libvirtd/qemu.conf on all
>>> the nodes to disable dynamic ownership as a temporary measure until
>>> this is patched for libgfapi?
>> 
>> No, other devices might have permission problems in such a case.
> 
> I wonder how libvirt can change the permissions for devices it does not know 
> about? 
> 
> When using libgfapi, we pass libvirt: 
> 
> <disk type='network' device='disk'>
>     <source protocol='gluster' name='volume/----'>
>         <host name='brick1.example.com' transport='tcp'/>
>         <host name='brick2.example.com' transport='tcp'/>
>     </source>
> </disk>
> 
> So libvirt does not have the path to the file, and it cannot change the 
> permissions. 
> 
> Alex, can you reproduce this flow and attach vdsm and engine logs from all 
> hosts 
> to the bug? 
> 
> Nir 
> 
>>> On 2019-06-13 10:37, Milan Zamazal wrote:
 Shani Leviim  writes:
 
> Hi,
> It seems that you hit this bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1666795
> 
> Adding +Milan Zamazal , Can you please confirm?
 
 There may still be problems when using GlusterFS with libgfapi:
 https://bugzilla.redhat.com/1719789.
 
 What's your Vdsm version and which kind of storage do you use?
 
> *Regards,*
> 
> *Shani Leviim*
> 
> 
> On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter 
> wrote:
> 
>> after upgrading from 4.2 to 4.3, after a vm live migrates, its disk
>> images become owned by root:root. Live migration succeeds and the vm
>> stays up, but after shutting down the VM from this point, starting it up
>> again will cause it to fail. At this point I have to go in and change
>> the permissions back to vdsm:kvm on the images, and the VM will boot
>> again.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/
>> 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/36Z6BB5NGYEEFMPRTDYKFJVVBPZFUCBL/

  

Links:
--
[1] http://brick1.example.com
[2] http://brick2.example.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YFAVZUB3CTAEADS54ZD73KTFSXXBM2FB/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-14 Thread Nir Soffer
On Fri, Jun 14, 2019 at 7:05 PM Milan Zamazal  wrote:

> Alex McWhirter  writes:
>
> > In this case, i should be able to edit /etc/libvirtd/qemu.conf on all
> > the nodes to disable dynamic ownership as a temporary measure until
> > this is patched for libgfapi?
>
> No, other devices might have permission problems in such a case.
>

I wonder how libvirt can change the permissions for devices it does not
know about?

When using libgfapi, we pass libvirt:

<disk type='network' device='disk'>
    <source protocol='gluster' name='volume/----'>
        <host name='brick1.example.com' transport='tcp'/>
        <host name='brick2.example.com' transport='tcp'/>
    </source>
</disk>

So libvirt does not have the path to the file, and it cannot change the
permissions.
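
(A sketch of the manual cleanup mentioned in this thread, with an illustrative
path - the exact mount point and image UUIDs will differ per setup: the images
need to be owned by vdsm:kvm again, e.g. on the host

    chown vdsm:kvm /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-uuid>/images/<img-uuid>/*

before the VM will start again.)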

Alex, can you reproduce this flow and attach vdsm and engine logs from all
hosts to the bug?

Nir

> On 2019-06-13 10:37, Milan Zamazal wrote:
> >> Shani Leviim  writes:
> >>
> >>> Hi,
> >>> It seems that you hit this bug:
> >>> https://bugzilla.redhat.com/show_bug.cgi?id=1666795
> >>>
> >>> Adding +Milan Zamazal , Can you please confirm?
> >>
> >> There may still be problems when using GlusterFS with libgfapi:
> >> https://bugzilla.redhat.com/1719789.
> >>
> >> What's your Vdsm version and which kind of storage do you use?
> >>
> >>> *Regards,*
> >>>
> >>> *Shani Leviim*
> >>>
> >>>
> >>> On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter 
> >>> wrote:
> >>>
>  after upgrading from 4.2 to 4.3, after a vm live migrates, its disk
>  images become owned by root:root. Live migration succeeds and the vm
>  stays up, but after shutting down the VM from this point, starting it up
>  again will cause it to fail. At this point I have to go in and change
>  the permissions back to vdsm:kvm on the images, and the VM will boot
>  again.
>  ___
>  Users mailing list -- users@ovirt.org
>  To unsubscribe send an email to users-le...@ovirt.org
>  Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>  oVirt Code of Conduct:
>  https://www.ovirt.org/community/about/community-guidelines/
>  List Archives:
> 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/36Z6BB5NGYEEFMPRTDYKFJVVBPZFUCBL/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AKENVDWK3VE3COWVPWWYVJBQC2CIEAAY/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-14 Thread Adrian Quintero
Thanks Gobinda,
I am in the process of finishing up the 9 node cluster, once done I will
test this ansible role...



On Fri, Jun 14, 2019 at 12:45 PM Gobinda Das  wrote:

> We have an ansible role to replace a gluster node. I think it works only with
> the same FQDN.
> https://github.com/sac/gluster-ansible-maintenance
> I am not sure if it covers all scenarios, but you can try with the same FQDN.
>
> On Fri, Jun 14, 2019 at 7:13 AM Adrian Quintero 
> wrote:
>
>> Strahil,
>> Thanks for all the follow-up. I will try to reproduce the same scenario
>> today: deploy a 9-node cluster, completely kill the initiating node (vmm10),
>> and see if I can recover using the extra server approach (different
>> IP/FQDN). If I am able to recover I will also try to test your
>> suggested second approach (using the same IP/FQDN).
>> My objective here is to document the possible recovery scenarios without
>> any downtime or impact.
>>
>> I have documented a few setup and recovery scenarios with 6 and 9 nodes
>> already with a hyperconverged setup and I will make them available to the
>> community, hopefully this week, including the tests that you have been
>> helping me with. Hopefully this will provide help to others that are in the
>> same situation that I am, and it will also provide me with feedback from
>> more knowledgeable admins out there so that I can get this into production
>> in the near future.
>>
>>
>> Thanks again.
>>
>>
>>
>> On Wed, Jun 12, 2019 at 11:58 PM Strahil  wrote:
>>
>>> Hi Adrian,
>>>
>>> Please keep in mind that when a server dies, the easiest way to recover
>>> is to get another freshly installed server with different IP/FQDN .
>>> Then you will need to use 'replace-brick' and once gluster replaces that
>>> node - you should be able to remove the old entry in oVirt.
>>> Once the old entry is gone, you can add the new installation in oVirt
>>> via the UI.
>>>
>>> Another approach is to have the same IP/FQDN for the fresh install. In
>>> this situation, you need to have the same gluster ID (which should be a
>>> text file) and the peer IDs. Most probably you can create them on your own
>>> , based on data on the other gluster peers.
>>> Once the fresh install is available in 'gluster peer' , you can initiate
>>> a reset-brick' (don't forget to set the SELINUX , firewall and repos) and a
>>> full heal.
>>> From there you can reinstall the machine from the UI and it should be
>>> available for usage.
>>>
>>> P.S.: I know that the whole procedure is not so easy :)
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>> On Jun 12, 2019 19:02, Adrian Quintero  wrote:
>>>
>>> Strahil, I don't use the GUI that much; in this case I need to understand
>>> how it all is tied together if I want to move to production. As far as Gluster
>>> goes, I can do the administration through the CLI, however my test
>>> environment was set up using geodeploy for a hyperconverged
>>> setup under oVirt.
>>> The initial setup was 3 servers with the same set of physical disks:
>>> sdb, sdc, sdd, sde (this last one used for caching as it is an SSD)
>>>
>>> vmm10.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
>>> vmm10.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
>>> vmm10.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
>>> vmm10.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>>>
>>> vmm11.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
>>> vmm11.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
>>> vmm11.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
>>> vmm11.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>>>
>>> vmm12.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
>>> vmm12.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
>>> vmm12.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
>>> vmm12.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>>>
>>> *As you can see from the above, the engine volume is composed of
>>> hosts vmm10 (the initiating cluster server, but now a dead server), vmm11 and vmm12,
>>> on block device /dev/sdb (100Gb LV); the vmstore1 volume is also
>>> on /dev/sdb (2600Gb LV).*
>>> /dev/mapper/gluster_vg_sdb-gluster_lv_engine   xfs
>>>   100G  2.0G   98G   2% /gluster_bricks/engine
>>> /dev/mapper/gluster_vg_sdb-gluster_lv_vmstore1 xfs
>>>   2.6T   35M  2.6T   1% /gluster_bricks/vmstore1
>>> /dev/mapper/gluster_vg_sdc-gluster_lv_data1xfs
>>>   2.7T  4.6G  2.7T   1% /gluster_bricks/data1
>>> /dev/mapper/gluster_vg_sdd-gluster_lv_data2xfs
>>>   2.7T  9.5G  2.7T   1% /gluster_bricks/data2
>>> vmm10.mydomain.com:/engine
>>> fuse.glusterfs  300G  9.2G  291G   4%
>>> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_engine
>>> vmm10.mydomain.com:/vmstore1
>>> fuse.glusterfs  5.1T   53G  5.1T   2%
>>> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_vmstore1
>>> vmm10.mydomain.com:/data1
>>>  fuse.glusterfs  8.0T   95G  7.9T   2%
>>> 

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-14 Thread Gobinda Das
We have an ansible role to replace a gluster node. I think it works only with
the same FQDN.
https://github.com/sac/gluster-ansible-maintenance
I am not sure if it covers all scenarios, but you can try with the same FQDN.
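
(For anyone doing this by hand instead of via the role: the gluster side of a
replacement with a different FQDN is roughly the following per volume - a
sketch with placeholder names, not the exact commands the role runs:

    gluster peer probe <new-host>
    gluster volume replace-brick <volname> <dead-host>:<brick-path> <new-host>:<brick-path> commit force
    gluster volume heal <volname> full

After that the old host entry can be removed and the new one added in oVirt,
as Strahil describes in the quoted thread below.)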

On Fri, Jun 14, 2019 at 7:13 AM Adrian Quintero 
wrote:

> Strahil,
> Thanks for all the follow-up. I will try to reproduce the same scenario
> today: deploy a 9-node cluster, completely kill the initiating node (vmm10),
> and see if I can recover using the extra server approach (different
> IP/FQDN). If I am able to recover I will also try to test your
> suggested second approach (using the same IP/FQDN).
> My objective here is to document the possible recovery scenarios without
> any downtime or impact.
>
> I have documented a few setup and recovery scenarios with 6 and 9 nodes
> already with a hyperconverged setup and I will make them available to the
> community, hopefully this week, including the tests that you have been
> helping me with. Hopefully this will provide help to others that are in the
> same situation that I am, and it will also provide me with feedback from
> more knowledgeable admins out there so that I can get this into production
> in the near future.
>
>
> Thanks again.
>
>
>
> On Wed, Jun 12, 2019 at 11:58 PM Strahil  wrote:
>
>> Hi Adrian,
>>
>> Please keep in mind that when a server dies, the easiest way to recover
>> is to get another freshly installed server with different IP/FQDN .
>> Then you will need to use 'replace-brick' and once gluster replaces that
>> node - you should be able to remove the old entry in oVirt.
>> Once the old entry is gone, you can add the new installation in oVirt via
>> the UI.
>>
>> Another approach is to have the same IP/FQDN for the fresh install. In
>> this situation, you need to have the same gluster ID (which should be a
>> text file) and the peer IDs. Most probably you can create them on your own
>> , based on data on the other gluster peers.
>> Once the fresh install is available in 'gluster peer' , you can initiate
>> a reset-brick' (don't forget to set the SELINUX , firewall and repos) and a
>> full heal.
>> From there you can reinstall the machine from the UI and it should be
>> available for usage.
>>
>> P.S.: I know that the whole procedure is not so easy :)
>>
>> Best Regards,
>> Strahil Nikolov
>> On Jun 12, 2019 19:02, Adrian Quintero  wrote:
>>
>> Strahil, I don't use the GUI that much; in this case I need to understand
>> how it all is tied together if I want to move to production. As far as Gluster
>> goes, I can do the administration through the CLI, however my test
>> environment was set up using geodeploy for a hyperconverged
>> setup under oVirt.
>> The initial setup was 3 servers with the same set of physical disks:
>> sdb, sdc, sdd, sde (this last one used for caching as it is an SSD)
>>
>> vmm10.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
>> vmm10.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
>> vmm10.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
>> vmm10.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>>
>> vmm11.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
>> vmm11.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
>> vmm11.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
>> vmm11.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>>
>> vmm12.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
>> vmm12.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
>> vmm12.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
>> vmm12.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>>
>> *As you can see from the above, the engine volume is composed of
>> hosts vmm10 (the initiating cluster server, but now a dead server), vmm11 and vmm12,
>> on block device /dev/sdb (100Gb LV); the vmstore1 volume is also
>> on /dev/sdb (2600Gb LV).*
>> /dev/mapper/gluster_vg_sdb-gluster_lv_engine   xfs
>>   100G  2.0G   98G   2% /gluster_bricks/engine
>> /dev/mapper/gluster_vg_sdb-gluster_lv_vmstore1 xfs
>>   2.6T   35M  2.6T   1% /gluster_bricks/vmstore1
>> /dev/mapper/gluster_vg_sdc-gluster_lv_data1xfs
>>   2.7T  4.6G  2.7T   1% /gluster_bricks/data1
>> /dev/mapper/gluster_vg_sdd-gluster_lv_data2xfs
>>   2.7T  9.5G  2.7T   1% /gluster_bricks/data2
>> vmm10.mydomain.com:/engine
>> fuse.glusterfs  300G  9.2G  291G   4%
>> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_engine
>> vmm10.mydomain.com:/vmstore1
>> fuse.glusterfs  5.1T   53G  5.1T   2%
>> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_vmstore1
>> vmm10.mydomain.com:/data1
>>  fuse.glusterfs  8.0T   95G  7.9T   2%
>> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_data1
>> vmm10.mydomain.com:/data2
>>  fuse.glusterfs  8.0T  112G  7.8T   2%
>> /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_data2
>>
>>
>>
>>
>> *before any issues I increased the size of the cluster and the gluster
>> cluster with the following, creating 4 

[ovirt-users] Re: Windows Server 2019: Driver Signature Enforcement

2019-06-14 Thread Vinícius Ferrão
Hello Vadim,

I got the working drivers from RHEL8:
eecdff62b5d148f02dc92d7115631175 virtio-win-1.9.7-rhel8.iso

Non working ones directly from the Hosted Engine:
c55e2815bc7090f077cf82aed5c90423  virtio-win-0.1.171.iso

Thanks,


> On 13 Jun 2019, at 23:33, Vadim Rozenfeld  wrote:
> 
> On Thu, 2019-06-13 at 11:36 +0300, Yedidyah Bar David wrote:
>> On Mon, Jun 10, 2019 at 10:54 PM Vinícius Ferrão > > wrote:
>>> RHV drivers works.
>>> oVirt drivers does not.
>>> 
>>> Checked this now.
>>> 
>>> I’m not sure if this is intended or not. But oVirt drivers aren’t signed 
>>> for Windows.
>> 
>> oVirt's drivers are simply copied from virtio-win repositories.
>> 
>> Adding Vadim from virtio-win team.
>> 
>> Best regards,
> 
> Hi guys,
> 
> Can you please help us to reproduce the problem?
> 
> It will be great if you can provide me with the following
> information:
> 
> - qemu and host kernel versions,
> - qemu command line,
> - RHV and oVirt drivers versions.
> 
> Cheers,
> Vadim.
> 
> 
>>  
>>> 
>>> > On 29 May 2019, at 21:41, mich...@wanderingmad.com 
>>> >  wrote:
>>> > 
>>> > I'm running server 2012R2, 2016, and 2019 with no issue using the Redhat 
>>> > signed drivers from RHEV.
>>> > ___
>>> > Users mailing list -- users@ovirt.org 
>>> > To unsubscribe send an email to users-le...@ovirt.org 
>>> > 
>>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
>>> > 
>>> > oVirt Code of Conduct: 
>>> > https://www.ovirt.org/community/about/community-guidelines/ 
>>> > 
>>> > List Archives: 
>>> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/GBQEMLOW5DCSW7XSLNZKNX532BQHRFUB/
>>> >  
>>> > 
>>> ___
>>> Users mailing list -- users@ovirt.org 
>>> To unsubscribe send an email to users-le...@ovirt.org 
>>> 
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
>>> 
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/ 
>>> 
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PNZWHYEWW23N4GQKHMJ2RUSQR363NSYH/
>>>  
>>> 
>> 
>> 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EQ2N7BZLZFYPSCECYDVJ2SAXZMAX2IU3/


[ovirt-users] Re: Memory ballon question

2019-06-14 Thread Strahil
Hi Martin,

Thanks for clarifying that. 

Best Regards,
Strahil Nikolov

On Jun 14, 2019 13:02, Martin Sivak wrote:
>
> Hi, 
>
> > 2019-06-13 07:11:40,973 - mom.Controllers.Balloon - INFO - Ballooning 
> > guest:node1 from 695648 to 660865 
> > 2019-06-13 07:12:51,437 - mom.GuestMonitor.Thread - INFO - 
> > GuestMonitor-node1 ending 
> > 
> > Can someone clarify what exactly does this (from  to ) mean ? 
>
> It is the ballooning operation log: 
>
> - From - how much memory was left in the VM before the action 
> - To - how much after (could be both lower and higher) 
>
> I do not remember the units, but I think it was in KiB. 
>
> Martin 
>
>
> On Thu, Jun 13, 2019 at 9:26 PM Strahil Nikolov  
> wrote: 
> > 
> > Hi Martin,Darrell, 
> > 
> > thanks for your feedback. 
> > 
> > I have checked the /var/log/vdsm/mom.log and it seems that MOM was actually 
> > working: 
> > 
> > 2019-06-13 07:08:47,690 - mom.GuestMonitor.Thread - INFO - 
> > GuestMonitor-node1 starting 
> > 2019-06-13 07:09:39,490 - mom.Controllers.Balloon - INFO - Ballooning 
> > guest:node1 from 1048576 to 996147 
> > 2019-06-13 07:09:54,658 - mom.Controllers.Balloon - INFO - Ballooning 
> > guest:node1 from 996148 to 946340 
> > 2019-06-13 07:10:09,853 - mom.Controllers.Balloon - INFO - Ballooning 
> > guest:node1 from 946340 to 899023 
> > 2019-06-13 07:10:25,053 - mom.Controllers.Balloon - INFO - Ballooning 
> > guest:node1 from 899024 to 854072 
> > 2019-06-13 07:10:40,233 - mom.Controllers.Balloon - INFO - Ballooning 
> > guest:node1 from 854072 to 811368 
> > 2019-06-13 07:10:55,428 - mom.Controllers.Balloon - INFO - Ballooning 
> > guest:node1 from 811368 to 770799 
> > 2019-06-13 07:11:10,621 - mom.Controllers.Balloon - INFO - Ballooning 
> > guest:node1 from 770800 to 732260 
> > 2019-06-13 07:11:25,827 - mom.Controllers.Balloon - INFO - Ballooning 
> > guest:node1 from 732260 to 695647 
> > 2019-06-13 07:11:40,973 - mom.Controllers.Balloon - INFO - Ballooning 
> > guest:node1 from 695648 to 660865 
> > 2019-06-13 07:12:51,437 - mom.GuestMonitor.Thread - INFO - 
> > GuestMonitor-node1 ending 
> > 
> > Can someone clarify what exactly does this (from  to ) mean ? 
> > 
> > Best Regards, 
> > Strahil Nikolov 
> > 
> > On Thursday, June 13, 2019 at 17:27:01 GMT+3, Martin Sivak wrote: 
> > 
> > 
> > Hi, 
> > 
> > iirc the guest agent is not needed anymore as we get almost the same 
> > stats from the balloon driver directly. 
> > 
> > Ballooning has to be enabled on cluster level though. So that is one 
> > thing to check. If that is fine then I guess a more detailed 
> > description is needed. 
> > 
> > oVirt generally starts ballooning when the memory load gets over 80% 
> > of available memory. 
> > 
> > The host agent that handles ballooning is called mom and the logs are 
> > located in /var/log/vdsm/mom* iirc. It might be a good idea to check 
> > whether the virtual machines were declared ready (meaning all data 
> > sources we collect provided data). 
> > 
> > -- 
> > Martin Sivak 
> > used to be maintainer of mom 
> > 
> > On Thu, Jun 13, 2019 at 12:26 AM Darrell Budic  
> > wrote: 
> > > 
> > > Do you have the ovirt-guest-agent running on your VMs? It’s required for 
> > > ballooning to control allocations on the guest side. 
> > > 
> > > On Jun 12, 2019, at 11:32 AM, Strahil  wrote: 
> > > 
> > > Hello All, 
> > > 
> > > as a KVM user I know how usefull is the memory balloon and how you can 
> > > both increase - and also decrease memory live (both Linux & Windows). 
> > > I have noticed that I cannot decrease the memory in oVirt. 
> > > 
> > > Does anyone got a clue why the situation is like that ? 
> > > 
> > > I was expecting that the guaranteed memory is the minimum to which the 
> > > balloon driver will not go bellow, but when I put my host under pressure 
> > > - the host just started to swap instead of reducing some of the VM memory 
> > > (and my VMs had plenty of free space). 
> > > 
> > > It will be great if oVirt can decrease the memory (if the VM has 
> > > unallocated memory) when the host is under pressure and the VM cannot be 
> > > relocated. 
> > > 
> > > Best Regards, 
> > > Strahil Nikolov 
> > > 
> > > ___ 
> > > Users mailing list -- users@ovirt.org 
> > > To unsubscribe send an email to users-le...@ovirt.org 
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
> > > oVirt Code of Conduct: 
> > > https://www.ovirt.org/community/about/community-guidelines/ 
> > > List Archives: 
> > > https://lists.ovirt.org/archives/list/users@ovirt.org/message/LUWCN2MLNTDJUEZBCTVXFMVABGPUSEFH/
> > >  
> > > 
> > > 
> > > ___ 
> > > Users mailing list -- users@ovirt.org 
> > > To unsubscribe send an email to users-le...@ovirt.org 
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
> > > oVirt Code of Conduct: 
> > > 

[ovirt-users] Re: Memory ballon question

2019-06-14 Thread Martin Sivak
Hi,

> 2019-06-13 07:11:40,973 - mom.Controllers.Balloon - INFO - Ballooning 
> guest:node1 from 695648 to 660865
> 2019-06-13 07:12:51,437 - mom.GuestMonitor.Thread - INFO - GuestMonitor-node1 
> ending
>
> Can someone clarify what exactly does this (from  to ) mean ?

It is the ballooning operation log:

- From - how much memory was left in the VM before the action
- To - how much after (could be both lower and higher)

I do not remember the units, but I think it was in KiB.
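
(Assuming KiB, the last "from 695648 to 660865" line above would mean the
balloon target dropped from roughly 679 MiB to roughly 645 MiB.)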

Martin


On Thu, Jun 13, 2019 at 9:26 PM Strahil Nikolov  wrote:
>
> Hi Martin,Darrell,
>
> thanks for your feedback.
>
> I have checked the /var/log/vdsm/mom.log and it seems that MOM was actually 
> working:
>
> 2019-06-13 07:08:47,690 - mom.GuestMonitor.Thread - INFO - GuestMonitor-node1 
> starting
> 2019-06-13 07:09:39,490 - mom.Controllers.Balloon - INFO - Ballooning 
> guest:node1 from 1048576 to 996147
> 2019-06-13 07:09:54,658 - mom.Controllers.Balloon - INFO - Ballooning 
> guest:node1 from 996148 to 946340
> 2019-06-13 07:10:09,853 - mom.Controllers.Balloon - INFO - Ballooning 
> guest:node1 from 946340 to 899023
> 2019-06-13 07:10:25,053 - mom.Controllers.Balloon - INFO - Ballooning 
> guest:node1 from 899024 to 854072
> 2019-06-13 07:10:40,233 - mom.Controllers.Balloon - INFO - Ballooning 
> guest:node1 from 854072 to 811368
> 2019-06-13 07:10:55,428 - mom.Controllers.Balloon - INFO - Ballooning 
> guest:node1 from 811368 to 770799
> 2019-06-13 07:11:10,621 - mom.Controllers.Balloon - INFO - Ballooning 
> guest:node1 from 770800 to 732260
> 2019-06-13 07:11:25,827 - mom.Controllers.Balloon - INFO - Ballooning 
> guest:node1 from 732260 to 695647
> 2019-06-13 07:11:40,973 - mom.Controllers.Balloon - INFO - Ballooning 
> guest:node1 from 695648 to 660865
> 2019-06-13 07:12:51,437 - mom.GuestMonitor.Thread - INFO - GuestMonitor-node1 
> ending
>
> Can someone clarify what exactly does this (from  to ) mean ?
>
> Best Regards,
> Strahil Nikolov
>
> On Thursday, June 13, 2019 at 17:27:01 GMT+3, Martin Sivak wrote:
>
>
> Hi,
>
> iirc the guest agent is not needed anymore as we get almost the same
> stats from the balloon driver directly.
>
> Ballooning has to be enabled on cluster level though. So that is one
> thing to check. If that is fine then I guess a more detailed
> description is needed.
>
> oVirt generally starts ballooning when the memory load gets over 80%
> of available memory.
>
> The host agent that handles ballooning is called mom and the logs are
> located in /var/log/vdsm/mom* iirc. It might be a good idea to check
> whether the virtual machines were declared ready (meaning all data
> sources we collect provided data).
>
> --
> Martin Sivak
> used to be maintainer of mom
>
> On Thu, Jun 13, 2019 at 12:26 AM Darrell Budic  wrote:
> >
> > Do you have the ovirt-guest-agent running on your VMs? It’s required for 
> > ballooning to control allocations on the guest side.
> >
> > On Jun 12, 2019, at 11:32 AM, Strahil  wrote:
> >
> > Hello All,
> >
> > as a KVM user I know how usefull is the memory balloon and how you can both 
> > increase - and also decrease memory live (both Linux & Windows).
> > I have noticed that I cannot decrease the memory in oVirt.
> >
> > Does anyone got a clue why the situation is like that ?
> >
> > I was expecting that the guaranteed memory is the minimum to which the 
> > balloon driver will not go bellow, but when I put my host under pressure - 
> > the host just started to swap instead of reducing some of the VM memory 
> > (and my VMs had plenty of free space).
> >
> > It will be great if oVirt can decrease the memory (if the VM has 
> > unallocated memory) when the host is under pressure and the VM cannot be 
> > relocated.
> >
> > Best Regards,
> > Strahil Nikolov
> >
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/LUWCN2MLNTDJUEZBCTVXFMVABGPUSEFH/
> >
> >
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/22XYJD7XAYZLVYCJUB6TW3RZ5VJFJ3ET/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 

[ovirt-users] Re: High Performance VM: trouble using vNUMA and hugepages

2019-06-14 Thread Matthias Leopold

https://bugzilla.redhat.com/show_bug.cgi?id=1720558

On 13.06.19 at 15:42, Andrej Krejcir wrote:

Hi,

this is probably a bug. Can you open a new ticket in Bugzilla?
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine

As a workaround, if you are sure that the VM's NUMA configuration is 
compatible with the host's NUMA configuration, you could create a custom 
cluster scheduling policy and disable the "NUMA" filter. In 
Administration -> Configure -> Scheduling Policies.



Regards,
Andrej


On Thu, 13 Jun 2019 at 12:49, Matthias Leopold 
> wrote:

 > Hi,
 >
 > I'm having trouble using vNUMA and hugepages at the same time:
 >
 > - hypervisor host hast 2 CPU and 768G RAM
 > - hypervisor host is configured to allocate 512 1G hugepages
 > - VM configuration
 > * 2 virtual sockets, vCPUs are evenly pinned to 2 physical CPUs
 > * 512G RAM
 > * 2 vNUMA nodes that are pinned to the 2 host NUMA nodes
 > * custom property "hugepages=1048576"
 > - VM is the only VM on hypervisor host
 >
 > when I want to start the VM I'm getting the error message
 > "The host foo did not satisfy internal filter NUMA because cannot
 > accommodate memory of VM's pinned virtual NUMA nodes within host's
 > physical NUMA nodes"
 > VM start only works when VM memory is shrunk so that it fits in (host
 > memory - allocated huge pages)
 >
 > I don't understand why this happens. Can someone explain to me how this
 > is supposed to work?
 >
 > oVirt engine is 4.3.3
 > oVirt host is 4.3.4
 >
 > thanks
 > matthias

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DTTUJE37ZYYP3JUWKDQARHXAXKOMS2DF/


[ovirt-users] Re: Windows Server 2019: Driver Signature Enforcement

2019-06-14 Thread Vadim Rozenfeld
On Thu, 2019-06-13 at 15:24 -0300, Vinícius Ferrão wrote:
> Lev, thanks for the reply.
> So basically Windows on Secureboot UEFI is simply “broken” within
> oVirt?
> 
> Will Red Hat reconsider this? Since one of the “selling points” of
> oVirt 4.3 was UEFI support. Can the RH WHQL drivers be shipped with
> oVirt?

Technically, WHQL signing is not required to satisfy secure boot
requirements. UEFI signing should be enough. But there might be some
license issues:
https://techcommunity.microsoft.com/t5/Windows-Hardware-Certification/Microsoft-UEFI-CA-Signing-policy-updates/ba-p/364828

> Thanks,
> 
> > On 13 Jun 2019, at 07:25, Lev Veyde  wrote:
> > Hi,
> > 
> > I think that it's expected behaviour.
> > 
> > In secure mode only WHQL'd drivers are allowed to be loaded into
> > the OS kernel, and while RHEV/RHV provides WHQL'd drivers, oVirt
> > users receive RH signed ones, which from the OS standpoint are
> > basically not certified.
> > 
> > Thanks in advance,
> > 
> > On Thu, Jun 13, 2019 at 1:12 PM Yedidyah Bar David  > > wrote:
> > > On Mon, Jun 10, 2019 at 10:54 PM Vinícius Ferrão  > > hpc.com.br> wrote:
> > > > RHV drivers works.
> > > > 
> > > > oVirt drivers does not.
> > > > 
> > > > 
> > > > 
> > > > Checked this now.
> > > > 
> > > > 
> > > > 
> > > > I’m not sure if this is intended or not. But oVirt drivers
> > > > aren’t signed for Windows.
> > > 
> > > oVirt's drivers are simply copied from virtio-win repositories.
> > > Adding Vadim from virtio-win team.
> > > Best regards,
> > >  
> > > > 
> > > > > On 29 May 2019, at 21:41, mich...@wanderingmad.com wrote:
> > > > 
> > > > > 
> > > > 
> > > > > I'm running server 2012R2, 2016, and 2019 with no issue using
> > > > the Redhat signed drivers from RHEV.
> > > > 
> > > > > ___
> > > > 
> > > > > Users mailing list -- users@ovirt.org
> > > > 
> > > > > To unsubscribe send an email to users-le...@ovirt.org
> > > > 
> > > > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > > 
> > > > > oVirt Code of Conduct: https://www.ovirt.org/community/about/
> > > > community-guidelines/
> > > > 
> > > > > List Archives: https://lists.ovirt.org/archives/list/users@ov
> > > > irt.org/message/GBQEMLOW5DCSW7XSLNZKNX532BQHRFUB/
> > > > 
> > > > ___
> > > > 
> > > > Users mailing list -- users@ovirt.org
> > > > 
> > > > To unsubscribe send an email to users-le...@ovirt.org
> > > > 
> > > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > > 
> > > > oVirt Code of Conduct: https://www.ovirt.org/community/about/co
> > > > mmunity-guidelines/
> > > > 
> > > > List Archives: https://lists.ovirt.org/archives/list/users@ovir
> > > > t.org/message/PNZWHYEWW23N4GQKHMJ2RUSQR363NSYH/
> > > > 
> > > 
> > > 
> > > -- 
> > > Didi
> > > 
> > > ___
> > > 
> > > Users mailing list -- users@ovirt.org
> > > 
> > > To unsubscribe send an email to users-le...@ovirt.org
> > > 
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > 
> > > oVirt Code of Conduct: https://www.ovirt.org/community/about/comm
> > > unity-guidelines/
> > > 
> > > List Archives: https://lists.ovirt.org/archives/list/users@ovirt.
> > > org/message/OIBSOZEBGSF65JEVMNMPOCAB6ICTVKIU/
> > > 
> > 
> > -- 
> > 
> > Lev Veyde
> > Software Engineer, RHCE | RHCVA | MCITP
> > Red Hat Israel
> > 
> > 
> > 
> > l...@redhat.com | lve...@redhat.com
> > 
> > 
> > 
> >   
> > 
> > TRIED. TESTED. TRUSTED.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VW3VU46FEB5UITUF54XRHI4FOI6TSR26/


[ovirt-users] Ovirt engine setup Error

2019-06-14 Thread PS Kazi
I am getting an error during oVirt Engine setup:
Ovirt Node version 4.3.4

[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 90, "changed": true, 
"cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:5f:0e:ea | awk '{ 
print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.086952", "end": "2019-06-14 
12:05:37.249240", "rc": 0, "start": "2019-06-14 12:05:37.162288", "stderr": "", 
"stderr_lines": [], "stdout": "", "stdout_lines": []}
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts 
for the local VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system 
may not be provisioned according to the playbook results: please check the logs 
for the issue, fix accordingly or re-deploy from scratch.\n"}

Please help me
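
(The failing task above is the lookup of the local engine VM's DHCP lease,
which returned nothing. While the deploy is still in the failed state it can
be re-checked by hand on the host - the MAC address is the one from the error
message:

    virsh -r net-list --all
    virsh -r net-dhcp-leases default
    virsh -r net-dhcp-leases default | grep -i 00:16:3e:5f:0e:ea

If the 'default' network is inactive or no lease ever shows up, the local
engine VM never got an address, which is what the playbook appears to be
complaining about.)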
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PQK53I7SFYBTYYZ3AMQRGUO7VVMWJZRK/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-14 Thread Milan Zamazal
Alex McWhirter  writes:

> In this case, i should be able to edit /etc/libvirtd/qemu.conf on all
> the nodes to disable dynamic ownership as a temporary measure until
> this is patched for libgfapi?

No, other devices might have permission problems in such a case.

> On 2019-06-13 10:37, Milan Zamazal wrote:
>> Shani Leviim  writes:
>>
>>> Hi,
>>> It seems that you hit this bug:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1666795
>>>
>>> Adding +Milan Zamazal , Can you please confirm?
>>
>> There may still be problems when using GlusterFS with libgfapi:
>> https://bugzilla.redhat.com/1719789.
>>
>> What's your Vdsm version and which kind of storage do you use?
>>
>>> *Regards,*
>>>
>>> *Shani Leviim*
>>>
>>>
>>> On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter 
>>> wrote:
>>>
 after upgrading from 4.2 to 4.3, after a vm live migrates, its disk
 images become owned by root:root. Live migration succeeds and the vm
 stays up, but after shutting down the VM from this point, starting it up
 again will cause it to fail. At this point I have to go in and change
 the permissions back to vdsm:kvm on the images, and the VM will boot
 again.
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/36Z6BB5NGYEEFMPRTDYKFJVVBPZFUCBL/


[ovirt-users] Ovirt engine setup Err'or

2019-06-14 Thread PS Kazi
I am getting the following error during oVirt engine setup.
I am using oVirt Node 4.3.4.
Please help.
___
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 90, "changed": true, 
"cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:5f:0e:ea | awk '{ 
print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.086952", "end": "2019-06-14 
12:05:37.249240", "rc": 0, "start": "2019-06-14 12:05:37.162288", "stderr": "", 
"stderr_lines": [], "stdout": "", "stdout_lines": []}
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts 
for the local VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system 
may not be provisioned according to the playbook results: please check the logs 
for the issue, fix accordingly or re-deploy from scratch.\n"}
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4DOSJSXFJIAV2GF4EDXWNGF3J75DPPNW/


[ovirt-users] Re: Ovirt hiperconverged setup error

2019-06-14 Thread PS Kazi
After upgrading to oVirt Node 4.3.4, the problem has disappeared.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JJHSIAWAWIUM46FC6CIG7FLA7YSVACGI/


[ovirt-users] Re: The CPU type of the cluster is unknown. Its possible to change the cluster cpu or set a different one per VM.

2019-06-14 Thread Strahil
In the docs, the proper order to update is:
1. update ovirt-setup packages on the engine
yum update ovirt\*setup\*
2. update the engine
engine-setup
3. update the OS on the engine
yum update
4. reboot the engine if kernel/glibc/systemd got updated
5. upgrade the hosts one by one
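
(For step 4, a quick way to check whether a reboot is actually needed on the
engine - assuming yum-utils is installed - is:

    needs-restarting -r

which reports whether a reboot is required after the update.)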

Best Regards,
Strahil Nikolov

On Jun 13, 2019 16:27, sandeepkumar...@gmail.com wrote:
>
> Thank you. 
>
> I upgraded all three hosts one by one to oVirt-4.3 and then HE. After the 
> upgrade, I was able to change the CPU Type. 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NGUEEF4DVLBXLUOHMSS2AYK6FE45VSXM/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AZDBCK475LJ67MDCDTCBREM45LBN4HBR/


[ovirt-users] Re: Info about soft fencing mechanism

2019-06-14 Thread Strahil

On Jun 13, 2019 16:14, Gianluca Cecchi  wrote:
>
> Hello,
> I would like to know in better detail how soft fencing works in 4.3.
> In particular, with "soft fencing" we "only" mean vdsmd restart attempt, 
> correct?
> Who is responsible for issuing the command? Manager or host itself?

The manager should take the decision, but the actual command should be done by
another host.

> Because in case of Manager, if the host has already lost connection, how 
> could the manager be able to do it?

Soft fencing is used when ssh is available. In all other cases it doesn't work.
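
(In practice, the SSH soft fencing step itself boils down to restarting the
non-responsive host's vdsmd service over SSH before escalating to power
management - conceptually something like:

    ssh root@<non-responsive-host> 'systemctl restart vdsmd'

though this is a sketch of the idea, not the exact command that gets run.)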
> Thanks in advance for clarifications and eventually documentation pointers

oVirt DOCs need a lot of updates, but I never found a way to add or edit a page.

Best Regards,
Strahil Nikolov___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OQIENJDAWQNHORWFLSUYWJKH7SS7E5JE/


[ovirt-users] Re: Hosted Engine Abruptly Stopped Responding - Unexpected Shutdown

2019-06-14 Thread Strahil
Hi Maria,

I guess the memory usage is very specific to the environment.

My setup includes 2 VDOs, 8 gluster volumes, 2 clusters, 12 VMs and only 1
user - the built-in admin.
In result my engine is using 4GB of RAM.
How many users/storages/clusters/VMs do you have ?

When you login on the engine, what is the process eating most of the RAM?
My suspicion is the DB. If so, maybe someone else can advise if performing 
vacuum on DB during upgrade will be beneficial.

Best Regards,
Strahil Nikolov

On Jun 13, 2019 15:55, souvaliotima...@mail.com wrote:
>
> Hello and thank you very much for your reply. 
>
> I'm terribly sorry for being so late to respond. 
>
> I thought the same, that dropping the cache was more of a workaround and not 
> a real solution but truthfully I was stuck and can't think of anything more 
> than how much I need to upgrade the memory on the nodes. I try to find info 
> about other ovirt virtualization set-ups and the amount of memory allocated 
> so I can get an idea of what my set-up needs. The only thing that I found was 
> that one admin had set ovirt up with 128GB and still needed more because of 
> the growing needs of the system and its users and was about to upgrade its 
> memory too. I'm just worried that ovirt is very memory consuming and no 
> matter how much I will "feed" it, it will still ask for more. Also, I'm 
> worried that there one, two or even more tweaks in the configurations that I 
> still miss and they'd be able to solve the memory problem. 
>
> Anyway, KSM is enabled. Sar shows that the committed memory when a Windows 10 
> VM is active too (alongside Hosted Engine of course, and two Linux VMs - 1 
> CentOS, 1 Debian) is around 89% in the specific host that it runs (together 
> with the Debian VM) and has reached up to 98%. 
>
> You are correct about the monitoring system too. I have set up a PRTG 
> environment and there's Nagios running but they can't yet see ovirt. I will 
> set them up correctly the next few days. 
>
> I haven't made any changes to my tuned profile. it's the default from ovirt. 
> Specifically, the active profile says it's set to virtual-host. 
>
> Again I'm very sorry for taking me so long to reply and thank you very much 
> for your response. 
>
> Best Regards, 
> Maria Souvalioti
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/G4YELWF5L4AKUT3OH4C4QJHHEEJPCI3G/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZTBB7JJY2OLEIF7UZOIRB2BNG35GIXE2/


[ovirt-users] Re: Windows Server 2019: Driver Signature Enforcement

2019-06-14 Thread Vadim Rozenfeld
On Thu, 2019-06-13 at 11:36 +0300, Yedidyah Bar David wrote:
> On Mon, Jun 10, 2019 at 10:54 PM Vinícius Ferrão  com.br> wrote:
> > RHV drivers works.
> > 
> > oVirt drivers does not.
> > 
> > 
> > 
> > Checked this now.
> > 
> > 
> > 
> > I’m not sure if this is intended or not. But oVirt drivers aren’t
> > signed for Windows.
> 
> oVirt's drivers are simply copied from virtio-win repositories.
> 
> Adding Vadim from virtio-win team.
> 
> Best regards,

Hi guys,

Can you please help us to reproduce the problem?

It will be great if you can provide me with the following information:

- qemu and host kernel versions,
- qemu command line,
- RHV and oVirt drivers versions.

Cheers,
Vadim.
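
(On an oVirt host, that information can typically be gathered with something
like the following - package names may vary slightly between setups:

    uname -r
    rpm -qa | egrep -i 'qemu|vdsm|virtio-win|ovirt-release'
    ps -ef | grep [q]emu-kvm    # full qemu command line of the running VM
)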

>  
> > 
> > > On 29 May 2019, at 21:41, mich...@wanderingmad.com wrote:
> > 
> > > 
> > 
> > > I'm running server 2012R2, 2016, and 2019 with no issue using the
> > Redhat signed drivers from RHEV.
> > 
> > > ___
> > 
> > > Users mailing list -- users@ovirt.org
> > 
> > > To unsubscribe send an email to users-le...@ovirt.org
> > 
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > 
> > > oVirt Code of Conduct: https://www.ovirt.org/community/about/comm
> > unity-guidelines/
> > 
> > > List Archives: https://lists.ovirt.org/archives/list/users@ovirt.
> > org/message/GBQEMLOW5DCSW7XSLNZKNX532BQHRFUB/
> > 
> > ___
> > 
> > Users mailing list -- users@ovirt.org
> > 
> > To unsubscribe send an email to users-le...@ovirt.org
> > 
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > 
> > oVirt Code of Conduct: https://www.ovirt.org/community/about/commun
> > ity-guidelines/
> > 
> > List Archives: https://lists.ovirt.org/archives/list/us...@ovirt.or
> > g/message/PNZWHYEWW23N4GQKHMJ2RUSQR363NSYH/
> > ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/33ZCZ4U4SX7Z6KAVR6KHWZPOEPENBHRQ/


[ovirt-users] Re: oVirt engine/engine-setup with other port than default HTTPS 443 possible?

2019-06-14 Thread Dirk Rydvan
The idea is quite good - a modified answer file with the suggested values
seems to work at the beginning:


[ INFO  ] Restarting httpd
  Please use the user 'admin@internal' and password specified in order 
to login
  Web access is enabled at:
  http://hypervisor.local:2080/ovirt-engine
  https://hypervisor.local:2443/ovirt-engine
  Internal CA B1:02:C1:A4:7A:70:18:22:F5:4C:55:B3:F6:B3:6A:3D:BF:EF4
  SSH fingerprint: SHA256:mj941Nk0yz2lrt0laognOHgTK18nP+zO6b4fPfXa3wM
 
  --== END OF SUMMARY ==--


OK - I add 2443 manually to the firewall:


firewall-cmd --permanent --add-port=2443/tcp --zone=public
firewall-cmd --reload


OK - httpd conf is still 443 instead of 2443, I change it manually:


 vi /etc/httpd/conf.d/ssl.conf


I see that disabling SELinux is necessary - I do so:


vi /etc/sysconfig/selinux 
reboot


it still does not work - in the browser:


Errorcode: SSL_ERROR_RX_RECORD_TOO_LONG 


and I have no more ideas
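
(SSL_ERROR_RX_RECORD_TOO_LONG usually means the browser reached a port that is
answering plain HTTP instead of TLS, so it is worth checking what is actually
listening on 2443 - for example:

    openssl s_client -connect hypervisor.local:2443 </dev/null
    curl -v http://hypervisor.local:2443/ovirt-engine

Also, instead of disabling SELinux entirely, the non-standard port can usually
be allowed with:

    semanage port -a -t http_port_t -p tcp 2443

This is a sketch of things to try, not a verified fix for this setup.)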
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NBSGYKDE72KGCJXUV25K2DY533KMZOBU/


[ovirt-users] Re: Memory ballon question

2019-06-14 Thread Strahil Nikolov
Hi Martin, Darrell,

thanks for your feedback.

I have checked the /var/log/vdsm/mom.log and it seems that MOM was actually
working:

2019-06-13 07:08:47,690 - mom.GuestMonitor.Thread - INFO - GuestMonitor-node1 starting
2019-06-13 07:09:39,490 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 1048576 to 996147
2019-06-13 07:09:54,658 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 996148 to 946340
2019-06-13 07:10:09,853 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 946340 to 899023
2019-06-13 07:10:25,053 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 899024 to 854072
2019-06-13 07:10:40,233 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 854072 to 811368
2019-06-13 07:10:55,428 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 811368 to 770799
2019-06-13 07:11:10,621 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 770800 to 732260
2019-06-13 07:11:25,827 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 732260 to 695647
2019-06-13 07:11:40,973 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 695648 to 660865
2019-06-13 07:12:51,437 - mom.GuestMonitor.Thread - INFO - GuestMonitor-node1 ending
Can someone clarify what exactly this (from  to ) means?

Best Regards,
Strahil Nikolov

On Thursday, June 13, 2019 at 17:27:01 GMT+3, Martin Sivak wrote:
 
 Hi,

iirc the guest agent is not needed anymore as we get almost the same
stats from the balloon driver directly.

Ballooning has to be enabled on cluster level though. So that is one
thing to check. If that is fine then I guess a more detailed
description is needed.

oVirt generally starts ballooning when the memory load gets over 80%
of available memory.

The host agent that handles ballooning is called mom and the logs are
located in /var/log/vdsm/mom* iirc. It might be a good idea to check
whether the virtual machines were declared ready (meaning all data
sources we collect provided data).

--
Martin Sivak
used to be maintainer of mom

On Thu, Jun 13, 2019 at 12:26 AM Darrell Budic  wrote:
>
> Do you have the ovirt-guest-agent running on your VMs? It’s required for 
> ballooning to control allocations on the guest side.
>
> On Jun 12, 2019, at 11:32 AM, Strahil  wrote:
>
> Hello All,
>
> as a KVM user I know how useful the memory balloon is and how you can both 
> increase - and also decrease memory live (both Linux & Windows).
> I have noticed that I cannot decrease the memory in oVirt.
>
> Does anyone have a clue why the situation is like that?
>
> I was expecting that the guaranteed memory is the minimum below which the 
> balloon driver will not go, but when I put my host under pressure - 
> the host just started to swap instead of reducing some of the VM memory (and 
> my VMs had plenty of free space).
>
> It will be great if oVirt can decrease the memory (if the VM has unallocated 
> memory) when the host is under pressure and the VM cannot be relocated.
>
> Best Regards,
> Strahil Nikolov
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LUWCN2MLNTDJUEZBCTVXFMVABGPUSEFH/
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/22XYJD7XAYZLVYCJUB6TW3RZ5VJFJ3ET/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7UDVZJ3EDXYNMI7DCZTA7YLWNIQU3EVO/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HLFHUFRYF6R2727TSWMLJ52UX6TPOW7B/


[ovirt-users] Re: Windows Server 2019: Driver Signature Enforcement

2019-06-14 Thread Vinícius Ferrão
Lev, thanks for the reply.

So basically Windows on Secureboot UEFI is simply “broken” within oVirt?

Will Red Hat reconsider this? Since one of the “selling points” of oVirt 4.3 
was UEFI support. Can the RH WHQL drivers be shipped with oVirt?

Thanks,

> On 13 Jun 2019, at 07:25, Lev Veyde  wrote:
> 
> Hi,
> 
> I think that it's expected behaviour.
> 
> In secure mode only WHQL'd drivers are allowed to be loaded into the OS 
> kernel, and while RHEV/RHV provides WHQL'd drivers, oVirt users receive RH 
> signed ones, which from the OS standpoint are basically not certified.
> 
> Thanks in advance,
> 
> On Thu, Jun 13, 2019 at 1:12 PM Yedidyah Bar David  > wrote:
> On Mon, Jun 10, 2019 at 10:54 PM Vinícius Ferrão  > wrote:
> RHV drivers works.
> oVirt drivers does not.
> 
> Checked this now.
> 
> I’m not sure if this is intended or not. But oVirt drivers aren’t signed for 
> Windows.
> 
> oVirt's drivers are simply copied from virtio-win repositories.
> 
> Adding Vadim from virtio-win team.
> 
> Best regards,
>  
> 
> > On 29 May 2019, at 21:41, mich...@wanderingmad.com 
> >  wrote:
> > 
> > I'm running server 2012R2, 2016, and 2019 with no issue using the Redhat 
> > signed drivers from RHEV.
> > ___
> > Users mailing list -- users@ovirt.org 
> > To unsubscribe send an email to users-le...@ovirt.org 
> > 
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
> > 
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/ 
> > 
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/GBQEMLOW5DCSW7XSLNZKNX532BQHRFUB/
> >  
> > 
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org 
> 
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
> 
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> 
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PNZWHYEWW23N4GQKHMJ2RUSQR363NSYH/
>  
> 
> 
> 
> -- 
> Didi
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org 
> 
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
> 
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> 
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OIBSOZEBGSF65JEVMNMPOCAB6ICTVKIU/
>  
> 
> 
> 
> -- 
> 
> LEV VEYDE
> SOFTWARE ENGINEER, RHCE | RHCVA | MCITP
> Red Hat Israel
> 
>  
> l...@redhat.com  | lve...@redhat.com 
>   
> TRIED. TESTED. TRUSTED. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZTSEJCIWBMUKXZSZL7B5EUDYSU3V3U57/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-14 Thread Alex McWhirter
In this case, I should be able to edit /etc/libvirtd/qemu.conf on all 
the nodes to disable dynamic ownership as a temporary measure until this 
is patched for libgfapi?



On 2019-06-13 10:37, Milan Zamazal wrote:

Shani Leviim  writes:


Hi,
It seems that you hit this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1666795

Adding +Milan Zamazal , Can you please confirm?


There may still be problems when using GlusterFS with libgfapi:
https://bugzilla.redhat.com/1719789.

What's your Vdsm version and which kind of storage do you use?


*Regards,*

*Shani Leviim*


On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter  
wrote:



after upgrading from 4.2 to 4.3, after a vm live migrates, its disk
images become owned by root:root. Live migration succeeds and the vm
stays up, but after shutting down the VM from this point, starting it up
again will cause it to fail. At this point I have to go in and change
the permissions back to vdsm:kvm on the images, and the VM will boot
again.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N64DCWMJNVUTE2DKVZ4MFN7GATQ5INJL/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-14 Thread Alex McWhirter

Yes we are using GlusterFS distributed replicate with libgfapi

VDSM 4.30.17

On 2019-06-13 10:37, Milan Zamazal wrote:

Shani Leviim  writes:


Hi,
It seems that you hit this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1666795

Adding +Milan Zamazal , Can you please confirm?


There may still be problems when using GlusterFS with libgfapi:
https://bugzilla.redhat.com/1719789.

What's your Vdsm version and which kind of storage do you use?


*Regards,*

*Shani Leviim*


On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter  
wrote:



after upgrading from 4.2 to 4.3, after a vm live migrates, its disk
images become owned by root:root. Live migration succeeds and the vm
stays up, but after shutting down the VM from this point, starting it up
again will cause it to fail. At this point I have to go in and change
the permissions back to vdsm:kvm on the images, and the VM will boot
again.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SEBMA6WG6LGCTYVNQSBLTZDCSVK6QRDN/


[ovirt-users] Re: High Performance VM: trouble using vNUMA and hugepages

2019-06-14 Thread Matthias Leopold

Hi,

thanks, this sounds good to me (in the sense of: I didn't make an 
obvious mistake). I'll open a bug report ASAP, probably tomorrow.


Regards
Matthias

On 13.06.19 at 15:42, Andrej Krejcir wrote:

Hi,

this is probably a bug. Can you open a new ticket in Bugzilla?
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine

As a workaround, if you are sure that the VM's NUMA configuration is 
compatible with the host's NUMA configuration, you could create a custom 
cluster scheduling policy and disable the "NUMA" filter. In 
Administration -> Configure -> Scheduling Policies.



Regards,
Andrej


On Thu, 13 Jun 2019 at 12:49, Matthias Leopold 
> wrote:

 > Hi,
 >
 > I'm having trouble using vNUMA and hugepages at the same time:
 >
 > - hypervisor host hast 2 CPU and 768G RAM
 > - hypervisor host is configured to allocate 512 1G hugepages
 > - VM configuration
 > * 2 virtual sockets, vCPUs are evenly pinned to 2 physical CPUs
 > * 512G RAM
 > * 2 vNUMA nodes that are pinned to the 2 host NUMA nodes
 > * custom property "hugepages=1048576"
 > - VM is the only VM on hypervisor host
 >
 > when I want to start the VM I'm getting the error message
 > "The host foo did not satisfy internal filter NUMA because cannot
 > accommodate memory of VM's pinned virtual NUMA nodes within host's
 > physical NUMA nodes"
 > VM start only works when VM memory is shrunk so that it fits in (host
 > memory - allocated huge pages)
 >
 > I don't understand why this happens. Can someone explain to me how this
 > is supposed to work?
 >
 > oVirt engine is 4.3.3
 > oVirt host is 4.3.4
 >
 > thanks
 > matthias
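
(For the numbers quoted above, assuming the NUMA filter only counts
non-hugepage memory: 768G total minus the 512 reserved 1G hugepages leaves
about 256G of regular host memory, which would explain why a 512G VM is
rejected even though the reserved hugepages alone could hold it.)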


--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FEHP3UCR7UYMLWHJQFLWOFOJTDK7B35X/