[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-08 Thread Jayme
IMO this is best handled at hardware level with UPS and battery/flash
backed controllers. Can you share more details about your oVirt setup? How
many servers are you working with and are you using replica 3 or replica 3
arbiter?

On Thu, Oct 8, 2020 at 9:15 AM Jarosław Prokopowski 
wrote:

> Hi Guys,
>
> I had a situation twice where, due to an unexpected power outage, something
> went wrong and VMs on glusterfs were not recoverable.
> Gluster heal did not help and I could not start the VMs any more.
> Is there a way to make such a setup bulletproof?
> Does it matter which volume type I choose - raw or qcow2? Or thin
> provision versus preallocated?
> Any other advice?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MRM6H2YENBP3AHQ5JWSFXH6UT6J6SDQS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EG54VXKWJMXY5IQWCHJ4BIG7CL2WEXJC/


[ovirt-users] Re: oVirt Survey Autumn 2020

2020-10-06 Thread Jayme
https://docs.google.com/forms/u/1/d/e/1FAIpQLSdzzh_MSsSq-LSQLauJzuaHC0Va1baXm84A_9XBCIileLNSPQ/viewform?usp=send_form


On Tue, Oct 6, 2020 at 7:28 PM Strahil Nikolov via Users 
wrote:

> Hello All,
>
>
>
> can someone send me the full link (not the short one) as my proxy is
> blocking it :)
>
>
>
> Best Regards,
>
> Strahil Nikolov
>
> On Tuesday, 6 October 2020 at 15:26:57 GMT+3, Sandro Bonazzola <
> sbona...@redhat.com> wrote:
>
> Just a kind reminder about the survey (https://forms.gle/bPvEAdRyUcyCbgEc7)
> closing on October 18th
>
>
>
> On Wednesday, 23 September 2020 at 11:11, Sandro Bonazzola <
> sbona...@redhat.com> wrote:
>
> > As we continue to develop oVirt 4.4, the Development and Integration
> teams at Red Hat would value insights on how you are deploying the oVirt
> environment.
>
> > Please help us to hit the mark by completing this short survey.
>
> > The survey will close on October 18th 2020. If you're managing multiple
> oVirt deployments with very different use cases or very different
> deployments you can consider answering this survey multiple times.
>
> >
>
> > Please note the answers to this survey will be publicly accessible.
>
> > This survey is under oVirt Privacy Policy available at
> https://www.ovirt.org/site/privacy-policy.html .
>
>
>
> and the privacy link was wrong, the right one:
> https://www.ovirt.org/privacy-policy.html (no content change, only url
> change)
>
>
>
>
>
> >
>
> >
>
> > The survey is available https://forms.gle/bPvEAdRyUcyCbgEc7
>
> >
>
> > --
>
> > Sandro Bonazzola
>
> > MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> > Red Hat EMEA
>
> >
>
> > sbona...@redhat.com
>
> >
>
> >
>
> >
>
> > Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
>
> >
>
> >
>
>
>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
>
>
> sbona...@redhat.com
>
>
>
>
>
>
>
> Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
>
>
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
>
>
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IJEW35XLR6WBM45DKYMZQ2UOZRWYXHKY/
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QYKXU7P2DNXPGZ2MOBBXVMJYA6DIND2S/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DAISGM2VUZ73SWAU5OALNXM35W7GCAVT/


[ovirt-users] Re: Replica Question

2020-09-28 Thread Jayme
It might be possible to do something similar to what is described in the
documentation here:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
-- but I'm not sure if oVirt HCI would support it. You might have to roll
your own GlusterFS storage solution. Someone with more Gluster/HCI
knowledge might know better.
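
As a rough sketch only (volume name, host names and brick paths below are
placeholders), a 6-brick arbitrated distributed-replicate volume built from
the gluster CLI would look something like this, with every third brick acting
as a metadata-only arbiter:

  # two replica sets of (2 data bricks + 1 arbiter brick)
  gluster volume create myvol replica 3 arbiter 1 \
    server1:/gluster_bricks/myvol/brick1 \
    server2:/gluster_bricks/myvol/brick1 \
    server3:/gluster_bricks/myvol/arbiter1 \
    server4:/gluster_bricks/myvol/brick2 \
    server5:/gluster_bricks/myvol/brick2 \
    server3:/gluster_bricks/myvol/arbiter2
  gluster volume start myvol
  gluster volume info myvol

Whether the oVirt HCI wizard would manage a layout like that is another
question; it might only be usable as a manually managed storage domain.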

On Mon, Sep 28, 2020 at 1:26 PM C Williams  wrote:

> Jayme,
>
> Thank for getting back with me !
>
> If I wanted to be wasteful with storage, could I start with an initial
> replica 2 + arbiter and then add 2 bricks to the volume ? Could the arbiter
> solve split-brains for 4 bricks ?
>
> Thank You For Your Help !
>
> On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
>
>> You can only do HCI in multiples of 3. You could do a 3-server HCI setup
>> and add the other two servers as compute nodes, or you could add a 6th
>> server and expand HCI across all 6.
>>
>> On Mon, Sep 28, 2020 at 12:28 PM C Williams 
>> wrote:
>>
>>> Hello,
>>>
>>> We recently received 5 servers. All have about 3 TB of storage.
>>>
>>> I want to deploy an oVirt HCI using as much of my storage and compute
>>> resources as possible.
>>>
>>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>>>
>>> I have deployed replica 3s and know about replica 2 + arbiter -- but an
>>> arbiter would not be applicable here -- since I have equal storage on all
>>> of the planned bricks.
>>>
>>> Thank You For Your Help !!
>>>
>>> C Williams
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AXTNUFZ3BASW2CWX7CWTMQS22324J437/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CECMD2SWBSBDAFP3TFMMYWTSV3UKU72E/


[ovirt-users] Re: Replica Question

2020-09-28 Thread Jayme
You can only do HCI in multiples of 3. You could do a 3-server HCI setup
and add the other two servers as compute nodes, or you could add a 6th
server and expand HCI across all 6.

On Mon, Sep 28, 2020 at 12:28 PM C Williams  wrote:

> Hello,
>
> We recently received 5 servers. All have about 3 TB of storage.
>
> I want to deploy an oVirt HCI using as much of my storage and compute
> resources as possible.
>
> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>
> I have deployed replica 3s and know about replica 2 + arbiter -- but an
> arbiter would not be applicable here -- since I have equal storage on all
> of the planned bricks.
>
> Thank You For Your Help !!
>
> C Williams
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AXTNUFZ3BASW2CWX7CWTMQS22324J437/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E5OIW7L4G6R5YMABQQ67JSS2BHB73QJT/


[ovirt-users] Re: Node 4.4.1 gluster bricks

2020-09-25 Thread Jayme
Assuming you don't care about the data on the drive, you may just need to use
wipefs on the device, i.e. wipefs -a /dev/sdb
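
A short sketch, assuming the data on the device is disposable (device name is
just an example):

  # without -a, wipefs only lists the signatures it finds
  wipefs /dev/sdb
  # remove all filesystem/RAID/LVM signatures from the device
  wipefs -a /dev/sdb

If pvcreate still reports "excluded by a filter" afterwards, it is usually the
LVM filter that vdsm writes into /etc/lvm/lvm.conf; if I recall correctly the
4.4 hosts ship a helper to review and regenerate it:

  grep -E '^\s*filter' /etc/lvm/lvm.conf
  vdsm-tool config-lvm-filter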

On Fri, Sep 25, 2020 at 12:53 PM Staniforth, Paul <
p.stanifo...@leedsbeckett.ac.uk> wrote:

> Hello,
>   how do you manage a gluster host when upgrading a node?
>
> I upgraded/replaced 2 nodes with the new install and can't recreate any
> gluster bricks.
> 1 node I wiped it clean and the other I left the 3 gluster brick drives
> untouched.
>
> If I try to create bricks using the UI on the nodes, I get an internal
> server error. When I try to create a PV from the clean disk, I get device
> excluded by filter.
>
> e.g.
>
> pvcreate /dev/sdb
>
>   Device /dev/sdb excluded by a filter.
>
> pvcreate /dev/mapper/SSDSC2KB240G7R_BTYS83100E0S240AGN
>
>   Device /dev/mapper/SSDSC2KB240G7R_BTYS83100E0S240AGN excluded by a
> filter.
>
>
>
>
> Thanks,
>
>
> Paul S.
>
> To view the terms under which this email is distributed, please go to:-
> http://leedsbeckett.ac.uk/disclaimer/email/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/27IUR3H54G2FRS3OJHYR7ZDWDXYULUSO/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MUIKIWIP67PYW5PMTMDIY37TWQWMTRRK/


[ovirt-users] Re: Node upgrade to 4.4

2020-09-24 Thread Jayme
Interested to hear how upgrading 4.3 HCI to 4.4 goes. I've been considering
it in my environment but was thinking about moving all VMs off to NFS
storage then rebuilding oVirt on 4.4 and importing.

On Thu, Sep 24, 2020 at 1:45 PM  wrote:

> I am hoping for a miracle like that, too.
>
> In the meantime I am trying to make sure that all variants of exports and
> imports from *.ova to re-attachable NFS domains work properly, in case I
> have to start from scratch.
>
> HCI upgrades don't get the special love you'd expect after RHV's proud
> announcement that they are now ready to take on Nutanix and vSAN.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C2HVZDUABWKNFN4IJD2ILLQF5E2DUUBU/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZKWFAVDI5L2SGTAY7J4ISNRI25LRCMZ5/


[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-21 Thread Jayme
You could try setting the host to maintenance (checking the "stop gluster
service" option), then re-activating the host, or try restarting the glusterd
service on the host.
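
Something along these lines on the affected host (the volume name "engine" is
just an example taken from your output):

  systemctl restart glusterd
  gluster peer status
  gluster volume status
  gluster volume heal engine info summary

If the CLI shows everything online but the engine still flags the host,
putting it in maintenance and re-activating it usually makes the engine
refresh its view of the bricks.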

On Mon, Sep 21, 2020 at 2:52 PM Jeremey Wise  wrote:

>
> oVirt engine shows one of the gluster servers having an issue. I did a
> graceful shutdown of all three nodes over the weekend as I had to move around
> some power connections in prep for a UPS.
>
> Came back up.. but
>
> [image: image.png]
>
> And this is reflected in 2 bricks online (should be three for each volume)
> [image: image.png]
>
> Command line shows gluster should be happy.
>
> [root@thor engine]# gluster peer status
> Number of Peers: 2
>
> Hostname: odinst.penguinpages.local
> Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
> State: Peer in Cluster (Connected)
>
> Hostname: medusast.penguinpages.local
> Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
> State: Peer in Cluster (Connected)
> [root@thor engine]#
>
> # All bricks showing online
> [root@thor engine]# gluster volume status
> Status of volume: data
> Gluster process                                                  TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick thorst.penguinpages.local:/gluster_bricks/data/data        49152     0          Y       11001
> Brick odinst.penguinpages.local:/gluster_bricks/data/data        49152     0          Y       2970
> Brick medusast.penguinpages.local:/gluster_bricks/data/data      49152     0          Y       2646
> Self-heal Daemon on localhost                                    N/A       N/A        Y       50560
> Self-heal Daemon on odinst.penguinpages.local                    N/A       N/A        Y       3004
> Self-heal Daemon on medusast.penguinpages.local                  N/A       N/A        Y       2475
>
> Task Status of Volume data
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: engine
> Gluster process                                                  TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick thorst.penguinpages.local:/gluster_bricks/engine/engine    49153     0          Y       11012
> Brick odinst.penguinpages.local:/gluster_bricks/engine/engine    49153     0          Y       2982
> Brick medusast.penguinpages.local:/gluster_bricks/engine/engine  49153     0          Y       2657
> Self-heal Daemon on localhost                                    N/A       N/A        Y       50560
> Self-heal Daemon on odinst.penguinpages.local                    N/A       N/A        Y       3004
> Self-heal Daemon on medusast.penguinpages.local                  N/A       N/A        Y       2475
>
> Task Status of Volume engine
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: iso
> Gluster process                                                  TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick thorst.penguinpages.local:/gluster_bricks/iso/iso          49156     49157      Y       151426
> Brick odinst.penguinpages.local:/gluster_bricks/iso/iso          49156     49157      Y       69225
> Brick medusast.penguinpages.local:/gluster_bricks/iso/iso        49156     49157      Y       45018
> Self-heal Daemon on localhost                                    N/A       N/A        Y       50560
> Self-heal Daemon on odinst.penguinpages.local                    N/A       N/A        Y       3004
> Self-heal Daemon on medusast.penguinpages.local                  N/A       N/A        Y       2475
>
> Task Status of Volume iso
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: vmstore
> Gluster process                                                  TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick thorst.penguinpages.local:/gluster_bricks/vmstore/vmstore  49154     0          Y       11023
> Brick odinst.penguinpages.local:/gluster_bricks/vmstore/vmstore  49154     0          Y       2993
> Brick medusast.penguinpages.local:/gluster_bricks/vmstore/vmstore 49154    0          Y       2668
> Self-heal Daemon on localhost                                    N/A       N/A        Y       50560
> Self-heal Daemon on medusast.penguinpages.local                  N/A       N/A        Y       2475
> Self-heal Daemon on odinst.penguinpages.local                    N/A       N/A        Y       3004
>
> Task Status of Volume 

[ovirt-users] Re: Ovirt Host Crashed

2020-09-02 Thread Jayme
I believe if you go into the storage domain in the GUI there should be a tab
for VMs which lists them; you can then click the ":" menu and choose
import.
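
If the GUI route is awkward, the same thing can (if memory serves) be done
over the REST API by registering the unregistered VMs found on the already
attached data domain. A sketch with curl, where the engine address,
credentials, storage domain ID, VM ID and cluster name are all placeholders:

  # list VMs that exist on the domain but are not yet registered in the engine
  curl -sk -u admin@internal:PASSWORD \
    "https://engine.example.com/ovirt-engine/api/storagedomains/SD_ID/vms;unregistered"
  # register one of them into a cluster
  curl -sk -u admin@internal:PASSWORD -X POST \
    -H "Content-Type: application/xml" \
    -d "<action><cluster><name>Default</name></cluster></action>" \
    "https://engine.example.com/ovirt-engine/api/storagedomains/SD_ID/vms/VM_ID/register"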

On Wed, Sep 2, 2020 at 9:24 AM Darin Schmidt  wrote:

> I am running this as an all-in-one system for a test bed at home. The
> system crashed, which led me to have to reinstall the OS (CentOS 8), and I
> imported the data stores but I cannot find any way to import the VMs that
> were in the DATA store. I haven't had a chance to backup/export the VMs. I
> haven't been able to find anything in the documentation on how to import
> these VMs. Any suggestions or links to what I'm looking for?
>
>
>
> I had to create a new DATA store as the option for importing a local data
> store wasn't available, I assume because the host was down. Then I
> imported the old data store.
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JLXAUHGSTLL45A4TLLJT3JL4TEINJLZR/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XS5KJTWSX4JEVO46XGMDRWYVU3SGL5KR/


[ovirt-users] Re: How to Backup a VM

2020-08-31 Thread Jayme
Thanks for letting me know, I suspected that might be the case. I’ll make a
note to fix that in the playbook

On Mon, Aug 31, 2020 at 3:57 AM Stefan Wolf  wrote:

> I think I found the problem.
>
>
>
> It is case sensitive. For the export it is NOT case sensitive, but for the
> step "wait for export" it is. I've changed it and now it seems to be working.
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RYFMBHZTJF76RT56HWUK5EV3ETB5QCSV/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JLNHT4MQ5RRQ5MVDATGSELUX27ECTB2E/


[ovirt-users] Re: How to Backup a VM

2020-08-30 Thread Jayme
Interesting, I’ve not hit that issue myself. I’d think it must somehow be
related to getting the event status. Is it happening to the same VMs every
time? Is there anything different about the VM names or anything that would
set them apart from the others that work?

On Sun, Aug 30, 2020 at 11:56 AM Stefan Wolf  wrote:

> OK,
>
>
>
> I've run the backup three times.
>
> I still have two machines where it still fails on TASK [Wait for export].
>
> I think the problem is not the timeout; in the oVirt engine the export has
> already finished: "
>
> Exporting VM VMName as an OVA to /home/backup/in_progress/VMName.ova on
> Host kvm360"
>
> But [Wait for export] still counts down to 1, exits with an error and moves on
> to the next task.
>
>
>
> bye shb
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/W65G6ZUL6C6UJAJI627WVGITGIUUJ2XZ/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CEC5GLU5JF7S7JEMAPSWEJ675UEXR6PT/


[ovirt-users] Re: How to Backup a VM

2020-08-30 Thread Jayme
Also, if you look at the blog post linked on the github page, it has info about
increasing the ansible timeout on the ovirt engine machine. This will be
necessary when dealing with large VMs that take over 2 hours to export.

On Sun, Aug 30, 2020 at 8:52 AM Jayme  wrote:

> You should be able to fix it by increasing the timeout variable in main.yml.
> I think the default is pretty low, around 600 seconds (10 minutes). I have
> mine set for a few hours since I’m dealing with large VMs. I’d also
> increase the poll interval as well so it’s not checking for completion every 10
> seconds. I set my poll interval to 5 minutes.
>
> I have backed up many large VMs (over 1TB) with this playbook for the past several
> months and never had a problem with it not completing.
>
> On Sun, Aug 30, 2020 at 3:39 AM Stefan Wolf  wrote:
>
>> Hello,
>>
>>
>>
>> >https://github.com/silverorange/ovirt_ansible_backup
>>
>> I am also still using 4.3.
>>
>> In my opinion this is by far the best and easiest solution for disaster
>> recovery. No need to install an appliance, and if there is a need to
>> recover, you can import the ova in every hypervisor - no databases, no
>> dependency.
>>
>>
>>
>> Sometimes I've had issues with "TASK [Wait for export]": sometimes it takes too
>> long to export the OVA, and I also had the problem that the export had already
>> finished but was not noticed by the script. In oVirt the export was
>> finished and the filename was renamed from *.tmp to *.ova.
>>
>>
>>
>> maybe you have an idea for me.
>>
>>
>>
>> thanks bye
>>
>> ___
>>
>> Users mailing list -- users@ovirt.org
>>
>> To unsubscribe send an email to users-le...@ovirt.org
>>
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>>
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q7TKVK5TL6HT7DQZCY354ICK5J3JRDH4/
>>
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UN2U3U3UD7ZRTJASWLQCAF34ELQSOJFN/


[ovirt-users] Re: How to Backup a VM

2020-08-30 Thread Jayme
You should be able to fix it by increasing the timeout variable in main.yml. I
think the default is pretty low, around 600 seconds (10 minutes). I have
mine set for a few hours since I’m dealing with large VMs. I’d also
increase the poll interval as well so it’s not checking for completion every 10
seconds. I set my poll interval to 5 minutes.

I have backed up many large VMs (over 1TB) with this playbook for the past several
months and never had a problem with it not completing.
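
The exact variable and file names live in the playbook itself, so treat the
names below as illustrative only; the idea is simply to raise the wait timeout
and poll less aggressively, either by editing main.yml or by overriding on the
command line:

  # e.g. wait up to 4 hours and poll every 5 minutes (names may differ in main.yml)
  ansible-playbook ovirt_vm_backup.yml -e "timeout=14400" -e "poll_interval=300"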

On Sun, Aug 30, 2020 at 3:39 AM Stefan Wolf  wrote:

> Hello,
>
>
>
> >https://github.com/silverorange/ovirt_ansible_backup
>
> I am also still using 4.3.
>
> In my opinion this is by far the best and easiest solution for disaster
> recovery. No need to install an appliance, and if there is a need to
> recover, you can import the ova in every hypervisor - no databases, no
> dependency.
>
>
>
> Sometimes I've had issues with "TASK [Wait for export]": sometimes it takes too
> long to export the OVA, and I also had the problem that the export had already
> finished but was not noticed by the script. In oVirt the export was
> finished and the filename was renamed from *.tmp to *.ova.
>
>
>
> maybe you have an idea for me.
>
>
>
> thanks bye
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q7TKVK5TL6HT7DQZCY354ICK5J3JRDH4/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FSD7CVYYHG2LLOJGBJFYSMY2DXOFGBUZ/


[ovirt-users] Re: How to Backup a VM

2020-08-29 Thread Jayme
Probably the easiest way is to export the VM as OVA. The OVA format is a
single file which includes the entire VM image along with the config. You
can import it back into oVirt easily as well. You can do this from the GUI
on a running VM and export to OVA without bringing the VM down. The export
process will handle the creation and deletion of the snapshot.

You can export to OVA to a directory located on one of the hosts, this
directory could be a NFS mount on an external storage server if you want.

The problem with export to OVA is that you can't put it on a schedule and
it is mostly a manual process. You can however initiate it with Ansible.

A little while ago I actually wrote an ansible playbook to back up multiple
VMs on a schedule. It was written for oVirt 4.3; I have not had time to
test it with 4.4 yet.

https://github.com/silverorange/ovirt_ansible_backup
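
Running it boils down to something like the following (file names here are
from memory and may not match the repository exactly -- check its README):

  git clone https://github.com/silverorange/ovirt_ansible_backup.git
  cd ovirt_ansible_backup
  # set the engine URL/credentials, the list of VMs and the export path in the vars file
  ansible-playbook backup.yml
  # a cron entry can then provide the schedule, e.g. nightly at 02:00:
  # 0 2 * * * cd /opt/ovirt_ansible_backup && ansible-playbook backup.yml >> /var/log/ova-backup.log 2>&1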

On Sat, Aug 29, 2020 at 10:14 AM Stefan Wolf  wrote:

> Hello to all
>
> I am trying to back up a normal VM, but it seems that I don't really understand
> the concept. At first I found the possibility to back up with the API:
> https://www.ovirt.org/documentation/administration_guide/#Setting_a_storage_domain_to_be_a_backup_domain_backup_domain
> .
> Creating a snapshot of the VM, finding the ID of the snapshot and the
> configuration of the VM makes sense to me.
> But at this point, I would download the config and the snapshot and put them
> on my backup storage, not create a new VM, attach the disk and run a
> backup with a backup program. And for restoring, do the same backwards.
>
> If I look at other projects, there seems to be a way to download the
> snapshot and config file, or am I wrong?
> Maybe someone can explain to me why I should use additional software
> installed on an additional machine. Or even better, someone can explain to me
> how I can avoid using additional backup software.
>
> And on the same topic, backup:
> there is in the documentation the possibility to set up backup storage.
> It is nearly the same: create a snapshot, or clone the machine, and export
> it to the backup storage.
> > Export the new virtual machine to a backup domain. See Exporting a
> Virtual Machine to a Data Domain in the Virtual Machine Management Guide.
> Sadly it is just written what to do, not how; the link points to a 404
> page. Maybe someone can explain to me how to use backup storage.
>
> thank you very much
>
> shb
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/COR6VIV477XUFDKJAVEO2ODCESVENKLV/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CD4ZSSXPKULSF74TJNAS2USFE7YTIH2R/


[ovirt-users] Re: Incremental VM backups

2020-08-19 Thread Jayme
Vprotect can do some form of incremental backup of oVirt VMs, at least on
4.3; I’m not sure where they’re at for 4.4 support. Worth checking out; it's
free for up to 10 VMs.

On Wed, Aug 19, 2020 at 7:03 AM Kevin Doyle 
wrote:

> Hi
>
> I am looking at ways to back up VMs, ideally ones that support incremental
> backups. I have found a couple of python scripts that snapshot a VM and
> back it up, but not incrementally. The question is: what do you use to back up
> the VMs? (both Linux and Windows)
>
>
>
> Thanks
>
> Kevin
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KU2FI6KCAQGTLE46YEXFPJY7KQTTAQYN/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GK3NKES3ISMHSNN3QHW65IZ2I3ZIE6LD/


[ovirt-users] Re: HA Storage options

2020-08-17 Thread Jayme
I think you are perhaps overthinking this a tad. GlusterFS is a fine solution
but it has had a rocky road. It would not be my first suggestion if you are
seeking high write performance, although that has been improving and it
can be fine tuned. Instability, at least in the past, was mostly centered
around cluster upgrades. Untouched gluster is solid and practically takes
care of itself. There are definitely more eggs in one basket when dealing
with hyperconverged in general.

AFAIK oVirt supports neither DRBD nor Ceph storage, although I think Ceph
may be planned in the future. I’m not aware of any plans to abandon
GlusterFS.

The best piece of advice I could offer from experience running HCI over the
past few years is to not rush to update to the latest release right away.

On Mon, Aug 17, 2020 at 8:39 PM David White via Users 
wrote:

> Hi,
> I started an email thread a couple months ago, and felt like I got some
> great feedback and suggestions on how to best setup an oVirt cluster.
> Thanks for your responses thus far.
> My goal is to take a total of 3-4 servers that I can use for *both* the
> storage *and* the virtualization, and I want both to be highly available.
>
> You guys told me about oVirt Hyperconverged with Gluster, and that seemed
> like a great option. However, I'm concerned that this may not actually be
> the best approach. I've spoken with multiple people at Red Hat who I have a
> relationship with (outside of the context of the project I'm working on
> here), and all of them have indicated to me that Gluster is being
> deprecated, and that most of the engineering focus these days is on Ceph. I
> was also told by a Solutions Architect who has extensive experience with
> RHV that the hyperconverged clusters he used to build would always give him
> problems.
>
> Does oVirt support DRBD or Ceph storage? From what I can find, I think
> that the answer to both of those is, sadly, no.
>
> So now I'm thinking about switching gears, and going with iSCSI instead.
> But I'm still trying to think about the best way to replicate the storage,
> and possibly use multipathing so that it will be HA for the VMs that rely
> on it.
>
> Has anyone else experienced problems with the Gluster hyperconverged
> solution?
> Am I overthinking this whole thing, and am I being too paranoid?
> Is it possible to setup some sort of software-RAID with multiple iSCSI
> targets?
>
> As an aside, I now have a machine that I was planning to begin doing some
> testing and practicing with.
> Previous to my conversations with the folks at Red Hat, I was planning on
> doing some initial testing and config with this server before purchasing
> another 2-3 servers to build the hyperconverged cluster.
>
>
> Sent with ProtonMail  Secure Email.
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MHYBWFGV74OUGQJVBNPK3D4HM2FQPMYC/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SBTHXSUX7GOWK5NYPCCEUW2BFYZIYVBX/


[ovirt-users] Re: where is error log for OVA import

2020-07-28 Thread Jayme
Check engine.log in /var/log/ovirt-engine on the engine server/VM.
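
For example (paths are the usual defaults on the engine):

  # follow the log while re-running the import
  tail -f /var/log/ovirt-engine/engine.log
  # or search for the failure afterwards
  grep -iE "import|ova|ERROR" /var/log/ovirt-engine/engine.log | tail -n 50

If I remember right, OVA imports are driven by an ansible run on the engine,
so there may also be per-import logs under /var/log/ovirt-engine/ova/.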

On Tue, Jul 28, 2020 at 7:16 PM Philip Brown  wrote:

> I just tried to import an OVA file.
> The GUI status mentions that things seem to go along fairly happily..
> it mentions that it creates a disk for it
> but then eventually just says
>
> "failed to import VM x into datacenter Default"
> with zero explanation.
>
> Isn't there a log file or something I can check, somewhere, to find out
> what the problem is?
>
>
> --
> Philip Brown| Sr. Linux System Administrator | Medata, Inc.
> 5 Peters Canyon Rd Suite 250
> Irvine CA 92606
> Office 714.918.1310| Fax 714.918.1325
> pbr...@medata.com| www.medata.com
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5UDCY3OHJBL7VEYUWHAZQEQHFZ6SOIK6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PQAYC3XPOYZG4G26PIETNBPAUDRET4VO/


[ovirt-users] Re: oVirt install questions

2020-07-19 Thread Jayme
You would set up three servers first in hyperconverged using either replica 3
or replica 3 arbiter 1, then add your fourth host afterward as a compute-only
host that can run VMs but does not participate in glusterfs storage.

On Sun, Jul 19, 2020 at 3:12 PM David White via Users 
wrote:

> Thank you.
> So to make sure I understand what you're saying, it sounds like if I need
> 4 nodes (or more), I should NOT do a "hyperconverged" installation, but
> should instead prepare Gluster separately from the oVirt Manager
> installation. Do I understand this correctly?
>
> If that is the case, can I still use some of the servers for dual purposes
> (Gluster + oVirt Manager)? I'm most likely going to need more servers for
> the storage than I will need for the RAM & CPU, which is a little bit
> opposite of what you wrote (using 3 servers for Gluster and adding
> additional nodes for RAM & CPU).
>
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Sunday, July 19, 2020 9:57 AM, Strahil Nikolov via Users <
> users@ovirt.org> wrote:
>
> > Hi David,
> >
>
> > it's a little bit different.
> >
>
> > Ovirt supports 'replica 3' (3 directories host the same content) or
> 'replica 3 arbiter 1' (2 directories host same data, third directory
> contains metadata to prevent split brain situations) volumes.
> >
>
> > If you have 'replica 3' it is smart to keep the data on separate hosts,
> although you can keep it on the same host (but then you should use no
> replica and oVirt's Single node setup).
> >
>
> > When you extend, you need to add bricks (a fancy name for a directory)
> in multiples of 3.
> >
>
> > If you wish that you want to use 5 nodes, you can go with 'replica 3
> arbiter 1' volume, where ServerA & ServerB host data and ServerC host only
> metadata (arbiter). Then you can extend and for example ServerC can host
> again metadata while ServerD & ServerE host data for the second replica set.
> >
>
> > You can even use only 3 servers for Gluster , while much more systems as
> ovirt nodes (CPU & RAM) to host VMs.
> In case of a 4 node setup - 3 hosts have the gluster data and the 4th
> is not part of the gluster, just hosting VMs.
> >
>
> > Best Regards,
> > Strahil Nikolov
> >
>
> > On 19 July 2020 at 15:25:10 GMT+03:00, David White via Users
> users@ovirt.org wrote:
> >
>
> > > Thanks again for explaining all of this to me.
> > > Much appreciated.
> > > Regarding the hyperconverged environment,
> > > reviewing
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
> ,
> > > it appears to state that you need, exactly, 3 physical servers.
> > > Is it possible to run a hyperconverged environment with more than 3
> > > physical servers?
> > > Because of the way that the gluster triple-redundancy works, I knew
> > > that I would need to size all 3 physical servers' SSD drives to store
> > > 100% of the data, but there's a possibility that 1 particular (future)
> > > customer is going to need about 10TB of disk space.
> > > For that reason, I'm thinking about what it would look like to have 4
> > > or even 5 physical servers in order to increase the total amount of
> > > disk space made available to oVirt as a whole. And then from there, I
> > > would of course setup a number of virtual disks that I would attach
> > > back to that customer's VM.
> > > So to recap, if I were to have a 5-node Gluster Hyperconverged
> > > environment, I'm hoping that the data would still only be required to
> > > replicate across 3 nodes. Does this make sense? Is this how data
> > > replication works? Almost like a RAID -- add more drives, and the RAID
> > > gets expanded?
> > > Sent with ProtonMail Secure Email.
> > > ‐‐‐ Original Message ‐‐‐
> > > On Tuesday, June 23, 2020 4:41 PM, Jayme jay...@gmail.com wrote:
> > >
>
> > > > Yes this is the point of hyperconverged. You only need three hosts to
> > > > setup a proper hci cluster. I would recommend ssds for gluster
> storage.
> > > > You could get away with non raid to save money since you can do
> replica
> > > > three with gluster meaning your data is fully replicated across all
> > > > three hosts.
> > >
>
> > > > On Tue, Jun 23, 2020 at 5:17 PM David White via Users
> > > > users@ovirt.org wrote:
> > >
>
> > > > > Thanks.
> > > > > I've only bee

[ovirt-users] Re: mixed hyperconverged?

2020-07-15 Thread Jayme
Your other hosts that aren’t participating in gluster storage would just
mount the gluster storage domains.

On Wed, Jul 15, 2020 at 6:44 PM Philip Brown  wrote:

> Hmm...
>
>
> Are you then saying, that YES, all host nodes need to be able to talk to
> the glusterfs filesystem?
>
>
> on a related note, I'd like to have as few nodes actually holding
> glusterfs data as possible, since I want that data on SSD.
> Rather than multiple "replication set" hosts, and one arbiter.. is it
> instead possible to have only 2 replication set hosts, and multiple
> (arbitrarily many) arbiter nodes?
>
>
> - Original Message -
> From: "Strahil Nikolov" 
> To: "users" , "Philip Brown" 
> Sent: Wednesday, July 15, 2020 1:59:40 PM
> Subject: Re: [ovirt-users] Re: mixed hyperconverged?
>
> You can use a distributed replicated volume of type 'replica 3 arbiter
> 1'.
> For example, NodeA and NodeB contain replica set 1 with NodeC as
> their arbiter, and NodeD and NodeE are the second replica set 2 with NodeC
> as their arbiter also.
>
> In such a case you get only 2 copies of a single shard, but you are fully
> "supported" from a gluster perspective.
> Also, all hosts can have external storage like your NAS.
>
> Best Regards,
> Strahil Nikolov
>
> On 15 July 2020 at 21:11:34 GMT+03:00, Philip Brown 
> wrote:
> >arg. when I said "add 2 more nodes that aren't part of the cluster", I
> >meant,
> >"part of the glusterfs cluster".
> >
> >or at minimum, maybe some kind of client-only setup, if they need
> >access?
> >
> >
> >- Original Message -
> >From: "Philip Brown" 
> >To: "users" 
> >Sent: Wednesday, July 15, 2020 10:37:48 AM
> >Subject: [ovirt-users] mixed hyperconverged?
> >
> >I'm thinking of doing an SSD based hyperconverged setup (for 4.3), but
> >am wondering about certain design issues.
> >
> >seems like the optimal number is 3 nodes for the glusterfs.
> >but.. I want 5 host nodes, not 3
> >and I want the main storage for VMs to be separate iSCSI NAS boxes.
> >Is it possible to have 3 nodes be the hyperconverged stuff.. but then
> >add in 2 "regular" nodes, that don't store anything and aren't part of
> >the cluster?
> >
> >is it required to be part of the gluster cluster, to also be part of
> >the ovirt cluster, if that's where the hosted-engine lives?
> >or can I just have the hosted engine be switchable between the 3 nodes,
> >and the other 2 be VM-only hosts?
> >
> >Any recommendations here?
> >
> >I don't want 5-way replication going on. Nor do I want to have to pay
> >for large SSDs on all my host nodes.
> >(I'm planning to run them with the oVirt 4.3 node image)
> >
> >
> >
> >--
> >Philip Brown| Sr. Linux System Administrator | Medata, Inc.
> >5 Peters Canyon Rd Suite 250
> >Irvine CA 92606
> >Office 714.918.1310| Fax 714.918.1325
> >pbr...@medata.com| www.medata.com
> >___
> >Users mailing list -- users@ovirt.org
> >To unsubscribe send an email to users-le...@ovirt.org
> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >oVirt Code of Conduct:
> >https://www.ovirt.org/community/about/community-guidelines/
> >List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZMXMGRGOMYE4UIQH32R6GCCHTABTGSX/
> >___
> >Users mailing list -- users@ovirt.org
> >To unsubscribe send an email to users-le...@ovirt.org
> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >oVirt Code of Conduct:
> >https://www.ovirt.org/community/about/community-guidelines/
> >List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RINQKWPRCQD5KYPFJYA75HFIUVJVTZXC/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/46IWO6CTOGJVZN2M6DMNB3AOX6B347S3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GGWUV4PJUL2HL6P6IW6PMVGNQZF5C35Z/


[ovirt-users] Re: how to get ovirt 4.3 documentation?

2020-07-13 Thread Jayme
Personally I find the RHV documentation much more complete:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/

On Mon, Jul 13, 2020 at 6:17 PM Philip Brown  wrote:

> I find it odd that the ovirt website allows you to see older version RELEASE
> NOTES...
> but doesn't seem to give links to general documentation for older versions.
> For example, if you read
>
> https://www.ovirt.org/release/4.3.10/
> it says,
>
> "For complete installation, administration, and usage instructions, see
> the oVirt Documentation."
>
> but that links to the general docs page at
> https://www.ovirt.org/documentation/
>
> It does NOT link to any ovirt 4.3 docs, which is what I actually need
>
>
>
> --
> Philip Brown| Sr. Linux System Administrator | Medata, Inc.
> 5 Peters Canyon Rd Suite 250
> Irvine CA 92606
> Office 714.918.1310| Fax 714.918.1325
> pbr...@medata.com| www.medata.com
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VZWLU75AKAJNT7T7C644ESHVINYIH7OQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UDMCD45NU3MMD42YAOGCCSHRO3VXE27E/


[ovirt-users] Re: Ovirt 4.3.10 Glusterfs SSD slow performance over 10GE

2020-07-07 Thread Jayme
Emy,

I was wondering how much, if any, improvement I'd see with Gluster storage
moving to oVirt 4.4/CentOS 8.x (but I have not made the switch yet myself).
You should keep in mind that your PERC controllers aren't supported by
CentOS 8 out of the box; support for many older controllers was dropped.
You should still be able to get it to work using a driver update disk
during install. See: https://forums.centos.org/viewtopic.php?t=71862

Either way, this is good to know ahead of time to limit surprises!
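
For reference, the driver update disk is passed to the CentOS 8 installer via
the inst.dd boot option; the image URL below is only an example of the
elrepo-style DUD naming, so check the thread above for the exact one your
controller needs:

  # appended to the installer kernel command line
  inst.dd=https://elrepo.org/linux/dud/el8/x86_64/dd-megaraid_sas-<version>.el8.elrepo.iso
  # or, with the DUD image copied onto a USB stick / second CD:
  inst.dd=/dev/sdX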

- Jayme

On Tue, Jul 7, 2020 at 10:22 AM shadow emy  wrote:

> I found the problem.
> The 3.x kernel in CentOS 7.8 is really too old and
> does not know how to handle new SSD disks or RAID controllers with the
> latest BIOS updates applied.
>
> Booting the latest Arch Linux ISO image with kernel 5.7.6, or a CentOS 8.2
> with kernel 4.18, brought the performance up to the right values.
> I ran multiple dd tests on the above images using bs of 10, 100 and 1000M
> and had a constant write speed of 1.1GB/s. This is the expected value for 2
> SSDs in RAID 0.
>
> I had also enabled cache settings on the Dell PERC 710 RAID controller:
> write cache set to "Write Back", disk cache set to "Enabled", read cache to
> "Read Ahead". For those who think "Write Back" is a problem and the data
> might be corrupted, this should be ok now with the latest filesystems, xfs or
> ext4, which can recover in case of power loss. To make data safer, I also
> have a RAID cache battery and UPS redundancy.
>
> Now I know I must run oVirt 4.4 with CentOS 8.2 for good performance.
> I saw that upgrading from 4.3 to 4.4 is not an easy task, with multiple failures
> and not quite straightforward (I also have the hosted engine on the shared
> Gluster storage, which makes this upgrade even more difficult), but
> eventually I think I can get it running.
>
> Thanks,
> Emy
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZOFENYMPKXC6Z6MHOFFAUPPQCUFDNKHO/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OSDGNRHS25GUZG3RHIEHIZX66UYMGJIV/


[ovirt-users] Re: Ovirt 4.3.10 Glusterfs SSD slow performance over 10GE

2020-06-28 Thread Jayme
I’ve tried various methods to improve gluster performance on similar
hardware and never had much luck. Small-file workloads were particularly
troublesome. I ended up switching high-performance VMs to NFS storage, and
performance with NFS improved greatly in my use case.

On Sun, Jun 28, 2020 at 6:42 PM shadow emy  wrote:

> > Hello ,
>
> Hello and thank you for the reply. Below are the answers to your questions.
> >
> > Let me ask some questions:
> > 1. What is the scheduler for your PV ?
>
>
> On the RAID controller device where the SSD disks are in RAID 0 (device
> sda) it is set to "deadline". But on the LVM logical volume dm-7,
> which is the logical block device for the "data" volume, it is set to none (I
> think this is ok).
>
>
> [root@host1 ~]# ls -al /dev/mapper/gluster_vg_sda3-gluster_lv_data
> lrwxrwxrwx. 1 root root 7 Jun 28 14:14 /dev/mapper/gluster_vg_sda3-gluster_lv_data -> ../dm-7
> [root@host1 ~]# cat /sys/block/dm-7/queue/scheduler
> none
> [root@host1 ~]# cat /sys/block/sda/queue/scheduler
> noop [deadline] cfq
>
>
>
> > 2. Have you aligned your PV during the setup 'pvcreate
> --dataalignment alignment_value
> > device'
>
>
> I did not set any alignment other than the default. Below are the partitions
> on /dev/sda.
> Can I enable partition alignment now, and if yes, how?
>
> sfdisk -d /dev/sda
> # partition table of /dev/sda
> unit: sectors
>
> /dev/sda1 : start= 2048, size=   487424, Id=83, bootable
> /dev/sda2 : start=   489472, size= 95731712, Id=8e
> /dev/sda3 : start= 96221184, size=3808675840, Id=83
> /dev/sda4 : start=0, size=0, Id= 0
>
>
>
> > 3. What is your tuned profile ? Do you use rhgs-random-io from
> > the
> ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/red...
> > ?
>
> My tuned active profile is virtual-host
>
> Current active profile: virtual-host
>
>  No i dont use any of the rhgs-random-io profiles
>
> > 4. What is the output of "xfs_info /path/to/your/gluster/brick" ?
>
> xfs_info /gluster_bricks/data
> meta-data=/dev/mapper/gluster_vg_sda3-gluster_lv_data isize=512  agcount=32, agsize=6553600 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=1        finobt=0 spinodes=0
> data     =                       bsize=4096   blocks=209715200, imaxpct=25
>          =                       sunit=64     swidth=64 blks
> naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
> log      =internal               bsize=4096   blocks=102400, version=2
>          =                       sectsz=512   sunit=64 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
> > 5. Are you using Jumbo Frames ? Does your infra support them?
> > Usually MTU of 9k is standard, but some switches and NICs support up to
> 16k.
> >
>
> Unfortunately I cannot set the MTU to 9000 and enable Jumbo Frames on these
> Cisco SG350X switches for specific ports. The switches don't support enabling
> Jumbo Frames on a single port, only on all ports.
> I have other devices connected to the switches on the remaining 48 ports
> that have 1Gb/s.
>
> > All the options for "optimize for virt" are located
> > at /var/lib/glusterd/groups/virt on each gluster node.
>
> I have already looked at that file previously, but not all of the volume
> settings that are set by "Optimize for Virt Store" are stored there.
> For example, "Optimize for Virt Store" sets network.remote.dio to
> disable while in glusterd/groups/virt it is set to enabled. Or
> cluster.granular-entry-heal: enable is not present there, but it is set by
> "Optimize for Virt Store".
>
> >
> > Best Regards,
> > Strahil Nikolov
> >
> >
> >
> >
> > On Sunday, 28 June 2020 at 22:13:09 GMT+3, jury cat  gmail.com
> > wrote:
> >
> >
> >
> >
> >
> > Hello all,
> >
> > I am using oVirt 4.3.10 on CentOS 7.8 with glusterfs 6.9.
> > My Gluster setup is 3 hosts in replica 3 (2 hosts + 1 arbiter).
> > All 3 hosts are Dell R720 with a PERC H710 Mini RAID controller (which has a
> > maximum throughput of 6Gb/s) and with 2×1TB Samsung SSDs in RAID 0. The volume
> > is partitioned using LVM thin provisioning and formatted XFS.
> > The hosts have separate 10GE network cards for storage traffic.
> > The Gluster network is connected to these 10GE network cards and is mounted
> > using FUSE glusterfs (NFS is disabled). The migration network is also activated
> > on the same storage network.
> >
> >
> > The problem is that the 10GE network is not used to its full potential by
> > Gluster.
> > If I do live migration of VMs I can see speeds of 7Gb/s ~ 9Gb/s.
> > The same network tests using iperf3 reported 9.9Gb/s, excluding the network
> > setup as a bottleneck (I will not paste all the iperf3 tests here for now).
> > I did not enable all the volume options from "Optimize for Virt Store",
> > because of the bug that can't set volume  

[ovirt-users] Re: oVirt install questions

2020-06-23 Thread Jayme
Yes, this is the point of hyperconverged. You only need three hosts to set up
a proper HCI cluster. I would recommend SSDs for gluster storage. You could
get away with non-RAID to save money, since you can do replica three with
gluster, meaning your data is fully replicated across all three hosts.


On Tue, Jun 23, 2020 at 5:17 PM David White via Users 
wrote:

> Thanks.
> I've only been considering SSD drives for storage, as that is what I
> currently have in the cloud.
>
> I think I've seen some things in the documents about oVirt and gluster
> hyperconverged.
> Is it possible to run oVirt and Gluster together on the same hardware? So
> 3 physical hosts would run CentOS or something, and I would install oVirt
> Node + Gluster onto the same base host OS? If so, then I could probably
> make that fit into my budget.
>
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Monday, June 22, 2020 1:02 PM, Strahil Nikolov via Users <
> users@ovirt.org> wrote:
>
> > Hey David,
> >
>
> > keep in mind that you need some big NICs.
> > I started my oVirt lab with 1 Gbit NIC and later added 4 dual-port 1
> Gbit NICs and I had to create multiple gluster volumes and multiple storage
> domains.
> > Yet, windows VMs cannot use software raid for boot devices, thus it's a
> pain in the @$$.
> > I think that optimal is to have several 10Gbit NICs (at least 1 for
> gluster and 1 for oVirt live migration).
> > Also, NVMEs can be used as lvm cache for spinning disks.
> >
>
> > Best Regards,
> > Strahil Nikolov
> >
>
> > On 22 June 2020 at 18:50:01 GMT+03:00, David White
> dmwhite...@protonmail.com wrote:
> >
>
> > > > For migration between hosts you need a shared storage. SAN, Gluster,
> > > > CEPH, NFS, iSCSI are among the ones already supported (CEPH is a
> little
> > > > bit experimental).
> > >
>
> > > Sounds like I'll be using NFS or Gluster after all.
> > > Thank you.
> > >
>
> > > > The engine is just a management layer. KVM/qemu has that option a
> > > > long time ago, yet it's some manual work to do it.
> > > > Yeah, this environment that I'm building is expected to grow over
> time
> > > > (although that growth could go slowly), so I'm trying to architect
> > > > things properly now to make future growth easier to deal with. I'm
> also
> > > > trying to balance availability concerns with budget constraints
> > > > starting out.
> > >
>
> > > Given that NFS would also be a single point of failure, I'll probably
> > > go with Gluster, as long as I can fit the storage requirements into the
> > > overall budget.
> > > Sent with ProtonMail Secure Email.
> > > ‐‐‐ Original Message ‐‐‐
> > > On Monday, June 22, 2020 6:31 AM, Strahil Nikolov via Users
> > > users@ovirt.org wrote:
> > >
>
> > > > On 22 June 2020 at 11:06:16 GMT+03:00, David White via
> > > > usersus...@ovirt.org wrote:
> > >
>
> > > > > Thank you and Strahil for your responses.
> > > > > They were both very helpful.
> > >
>
> > > > > > I think a hosted engine installation VM wants 16GB RAM configured
> > > > > > though I've built older versions with 8GB RAM.
> > > > > > For modern VMs CentOS8 x86_64 recommends at least 2GB for a host.
> > > > > > CentOS7 was OK with 1, CentOS6 maybe 512K.
> > > > > > The tendency is always increasing with updated OS versions.
> > >
>
> > > > > Ok, so to clarify my question a little bit, I'm trying to figure
> > > > > out
> > > >
>
> > > > > how much RAM I would need to reserve for the host OS (or oVirt
> > > > > Node).
> > > >
>
> > > > > I do recall that CentOS / RHEL 8 wants a minimum of 2GB, so perhaps
> > > > > that would suffice?
> > > > > And then as you noted, I would need to plan to give the engine
> > > > > 16GB.
> > >
>
> > > > I run my engine on 4Gb or RAM, but i have no more than 20 VMs, the
> > > > larger the setup - the more ram for the engine is needed.
> > >
>
> > > > > > My minimum ovirt systems were mostly 48GB 16core, but most are
> > > > > > now
> > > >
>
> > > > > > 128GB 24core or more.
> > >
>
> > > > > But this is the total amount of physical RAM in your systems,
> > > > > correct?
> > > >
>
> > > > > Not the amount that you've reserved for your host OS?I've spec'd
> > > > > out
> > > >
>
> > > > > some hardware, and am probably looking at purchasing two PowerEdge
> > > > > R820's to start, each with 64GB RAM and 32 cores.
> > >
>
> > > > > > While ovirt can do what you would like it to do concerning a
> > > > > > single
> > > >
>
> > > > > > user interface, but with what you listed,
> > > > > > you're probably better off with just plain KVM/qemu and using
> > > > > > virt-manager for the interface.
> > >
>
> > > > > Can you migrate VMs from 1 host to another with virt-manager, and
> > > > > can
> > > >
>
> > > > > you take snapshots?
> > > > > If those two features aren't supported by virt-manager, then that
> > > > > would
> > > >
>
> > > > > almost certainly be a deal breaker.
> > >
>
> > > > The engine is just a management layer. KVM/qemu has that option a
> > > > 

[ovirt-users] Re: What happens when shared storage is down?

2020-06-10 Thread Jayme
This is of course not recommended, but there have been times where I have
lost network access to storage, or the storage server itself, while VMs were
running. They paused and came back up when storage was available again without
causing any problems. This doesn’t mean it’s 100% safe, but in my
experience it has not caused any issues.

Personally I would shut down the VMs, or live migrate the disks to secondary
storage and then migrate them back after the updates are performed.

On Wed, Jun 10, 2020 at 2:22 AM Vinícius Ferrão via Users 
wrote:

>
>
> > On 7 Jun 2020, at 08:34, Strahil Nikolov  wrote:
> >
> >
> >
> > На 7 юни 2020 г. 1:58:27 GMT+03:00, "Vinícius Ferrão via Users" <
> users@ovirt.org> написа:
> >> Hello,
> >>
> >> This is a pretty vague and difficult question to answer. But what
> >> happens if the shared storage holding the VMs is down or unavailable
> >> for a period of time?
> > Once  a  pending I/O is blocked, libvirt will pause the VM .
> >
> >> I’m aware that a longer timeout may put the VMs on pause state, but how
> >> this is handled? Is it a time limit? Requests limit? Who manages this?
> > You got sanlock.service that notifies the engine when a storage domain
> is inaccessible for more than 60s.
> >
> > Libvirt also will pause  a  VM when a pending I/O cannot be done.
> >
> >> In an event of self recovery of the storage backend what happens next?
> > Usually the engine should resume the VM,  and from application
> perspective nothing has happened.
>
> Hmm thanks Strahil. I was thinking to upgrade the storage backend of one
> of my oVirt clusters without powering off the VM’s, just to be lazy.
>
> The storage does not have dual controllers, so downtime is needed. I’m
> trying to understand what happens so I can evaluate this update without
> turning off the VMs.
>
> >> Manual intervention is required? The VMs may be down or they just
> >> continue to run? It depends on the guest OS running like in XenServer
> >> where different scenarios may happen?
> >>
> >> I’ve looked here:
> >> https://www.ovirt.org/documentation/admin-guide/chap-Storage.html but
> >> there’s nothing that goes about this question.
> >>
> >> Thanks,
> >>
> >> Sent from my iPhone
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BVZAG2V3KBB364U5VBRCBIU42LJNGCI6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TKGXPENNO7RTW3S7G3GZODMNOHPULEMR/


[ovirt-users] Re: ovirt vm backup tool

2020-06-09 Thread Jayme
I wrote a simple unofficial Ansible playbook to back up full VMs here:
https://github.com/silverorange/ovirt_ansible_backup -- it works great for
my use case, but it is more geared toward smaller environments.
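
If it helps, this is roughly how I run it (the playbook and variable names
below are from memory, so treat them as placeholders and check the repo
README for the real ones):

    git clone https://github.com/silverorange/ovirt_ansible_backup
    cd ovirt_ansible_backup
    # set the engine URL/credentials and the backup destination in the vars file
    ansible-playbook backup.yml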

For commercial software I'd take a look at vProtect (it's free for up to 10
VMs)

I've heard some rumblings about incremental backup support in 4.4 as some
others have suggested but don't have much knowledge on the subject.



On Tue, Jun 9, 2020 at 1:16 PM Gianluca Cecchi 
wrote:

> On Tue, Jun 9, 2020 at 5:24 PM Shani Leviim  wrote:
>
>> Hi Shashank,
>> You can use the new incremental backup feature, which available for a
>> tech preview for ovirt 4.4.
>>
>
> It seems it is not so; see this thread and errors received in 4.4 and
> latest answer from Nir:
>
> https://lists.ovirt.org/archives/list/users@ovirt.org/thread/CWLMCHTSWDNOLFUPPLOU7ORIVKHWD5GM/
>
> I too hoped to be able to test in 4.4. without going to master...
>
> Gianluca
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JFIDWTBW56OOAVOV7HHNEU2QVGRNXG3W/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FRCQPCBDU2HC3IUXTBZWFKDTG2GPFAL2/


[ovirt-users] Re: basic infra and glusterfs sizing question

2020-05-29 Thread Jayme
Also, I can't think of the limit off the top of my head. I believe it's
either 75 or 100GB. If the engine volume is set any lower the installation
will fail; there is a minimum size requirement.

On Fri, May 29, 2020 at 12:09 PM Jayme  wrote:

> Regarding the Gluster question: the volumes would be provisioned with LVM on
> the same block device. I believe 100GB is recommended for the engine
> volume. The other volumes, such as data, would be created on another logical
> volume and you can use up the rest of the available space there, e.g. 100GB
> engine, 500GB data and 400GB vmstore.
>
> Data domains are basically the same now; in the past there used to be
> different domain types, such as ISO domains, which are deprecated. You don't
> really need any more than an engine volume and a data volume. You could have a
> volume for storing ISOs if you wanted to. You could have a separate volume
> for OS disks and another volume for data disks, which would give you more
> flexibility for backups (so that you could back up data disks but not OS, for
> example).
>
> On Fri, May 29, 2020 at 10:29 AM Jiří Sléžka  wrote:
>
>> Hello,
>>
>> I am just curious if basic gluster HCI layout which is suggested in
>> cockpit has some deeper meaning.
>>
>> There are suggested 3 volumes
>>
>> * engine - it is clear, it is the volume where engine vm is running.
>> When this vm is 51GB big how small could this volume be? I have 1TB SSD
>> storage and I would like utilize it as much as possible. Could I create
>> this volume as small as this vm is? Is it safe for example for future
>> upgrades?
>>
>> * vmstore - it make sense it is a space for all other vms running in
>> oVirt. Right?
>>
>> * data - which purpose has this volume? other data like for example
>> ISOs? Direct disks?
>>
>> Another infra question... or maybe request for comment
>>
>> I have small amount of public ipv4 addresses in my housing (but I have
>> own switches there so I can create vlans and separate internal traffic).
>> I can access only these public ipv4 addresses directly. I would like to
>> conserve these addressess as much as possible so what is the best
>> approach in your opinion?
>>
>> * Install all hosts and HE with management network on private addressess
>>
>>   * have small router (hw appliance with for example LEDE) which will
>> utilize one ipv4 address and will do NAT and vpn for accessing my
>> internals vlans.
>> + looks like simple approach to me
>> - single point of failure in this router (not really - just in case
>> oVirt is badly broken and I need to access internal vlans to recover it)
>>
>>   * have this router as virtual appliance inside oVirt (something like
>> pfSense for example)
>> + no need hw router
>> + not sure but I could probably configure vrrp redundancy
>> - still single point of failure like in first case
>>
>>   * any other approach? Could ovn help here somehow?
>>
>> * Install all hosts and HE with public addresses :-)
>>   + access to all hosts directly
>>   - 3 node HCI cluster uses 4 public ip addressess
>>
>> Thanks for your opinions
>>
>> Cheers,
>>
>> Jiri
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LIFQQHTFVTS6KICR5MTRPGO5CH7QDLK7/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z232GYXPCKDAH2FCYSJQSDTD7GL6CUT7/


[ovirt-users] Re: basic infra and glusterfs sizing question

2020-05-29 Thread Jayme
Regarding the Gluster question: the volumes would be provisioned with LVM on
the same block device. I believe 100GB is recommended for the engine
volume. The other volumes, such as data, would be created on another logical
volume and you can use up the rest of the available space there, e.g. 100GB
engine, 500GB data and 400GB vmstore.

Data domains are basically the same now; in the past there used to be
different domain types, such as ISO domains, which are deprecated. You don't
really need any more than an engine volume and a data volume. You could have a
volume for storing ISOs if you wanted to. You could have a separate volume
for OS disks and another volume for data disks, which would give you more
flexibility for backups (so that you could back up data disks but not OS, for
example).
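
To make that layout concrete, here is a rough sketch of what it looks like at
the LVM/Gluster level (device, host and size names are illustrative; the HCI
deployment wizard generates the equivalent for you, typically with thin pools
for the non-engine volumes):

    # on each host, carve the 1TB SSD (/dev/sdb here) into bricks
    pvcreate /dev/sdb
    vgcreate gluster_vg_sdb /dev/sdb
    lvcreate -L 100G -n gluster_lv_engine  gluster_vg_sdb
    lvcreate -L 500G -n gluster_lv_data    gluster_vg_sdb
    lvcreate -L 400G -n gluster_lv_vmstore gluster_vg_sdb
    # format each LV with XFS, mount under /gluster_bricks/<name>, then e.g.
    gluster volume create engine replica 3 \
        host0:/gluster_bricks/engine/engine \
        host1:/gluster_bricks/engine/engine \
        host2:/gluster_bricks/engine/engine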

On Fri, May 29, 2020 at 10:29 AM Jiří Sléžka  wrote:

> Hello,
>
> I am just curious if basic gluster HCI layout which is suggested in
> cockpit has some deeper meaning.
>
> There are suggested 3 volumes
>
> * engine - it is clear, it is the volume where engine vm is running.
> When this vm is 51GB big how small could this volume be? I have 1TB SSD
> storage and I would like utilize it as much as possible. Could I create
> this volume as small as this vm is? Is it safe for example for future
> upgrades?
>
> * vmstore - it make sense it is a space for all other vms running in
> oVirt. Right?
>
> * data - which purpose has this volume? other data like for example
> ISOs? Direct disks?
>
> Another infra question... or maybe request for comment
>
> I have small amount of public ipv4 addresses in my housing (but I have
> own switches there so I can create vlans and separate internal traffic).
> I can access only these public ipv4 addresses directly. I would like to
> conserve these addressess as much as possible so what is the best
> approach in your opinion?
>
> * Install all hosts and HE with management network on private addressess
>
>   * have small router (hw appliance with for example LEDE) which will
> utilize one ipv4 address and will do NAT and vpn for accessing my
> internals vlans.
> + looks like simple approach to me
> - single point of failure in this router (not really - just in case
> oVirt is badly broken and I need to access internal vlans to recover it)
>
>   * have this router as virtual appliance inside oVirt (something like
> pfSense for example)
> + no need hw router
> + not sure but I could probably configure vrrp redundancy
> - still single point of failure like in first case
>
>   * any other approach? Could ovn help here somehow?
>
> * Install all hosts and HE with public addresses :-)
>   + access to all hosts directly
>   - 3 node HCI cluster uses 4 public ip addressess
>
> Thanks for your opinions
>
> Cheers,
>
> Jiri
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LIFQQHTFVTS6KICR5MTRPGO5CH7QDLK7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FINP3OVMABC7VG4OY7TINSK4OMLHCBL2/


[ovirt-users] Re: ovirt-websocket-proxy errors when trying noVNC

2020-05-28 Thread Jayme
Here is the bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1832210

On Thu, May 28, 2020 at 8:23 AM Jayme  wrote:

> If it’s the issue I’m thinking of, it’s because Apple (starting with Mojave)
> began rejecting certs whose validity period is longer than a certain limit
> (825 days), which the oVirt CA does not follow. I posted another message on
> this group about it a little while ago and I think a bug report was made.
>
> The only way I can get noVNC to work on a Mac is by using Firefox and making
> sure the CA is imported and trusted by Firefox. I cannot get it to work
> with Safari or Chrome.
>
> On Thu, May 28, 2020 at 8:08 AM Louis Bohm  wrote:
>
>> So as I said before I added the CA cert to my MAC (and I can see it in
>> the MAC’s Keychain).  But its still not working.  For humor I will try
>> adding the CA to my Windows VM and see if that produces a different result.
>>
>> Louis
>> -<<—->>-
>> Louis Bohm
>> louisb...@gmail.com
>>
>>
>> <https://www.youracclaim.com/badges/f11e0d65-21ad-4458-895b-2c5b5cb11134/public_url>
>>
>> <https://www.youracclaim.com/badges/f11e0d65-21ad-4458-895b-2c5b5cb11134/public_url>
>>
>> On May 27, 2020, at 11:01 AM, Scott Dickerson 
>> wrote:
>>
>>
>> On Wed, May 27, 2020 at 7:42 AM Louis Bohm  wrote:
>>
>>> OS: Oracle Linux 7.8 (unbreakable kernel)
>>> Using Oracle Linux Virtualization Manager: Software
>>> Version:4.3.6.6-1.0.9.el7
>>>
>>> Since I am running all of it on one physical machine I opted to install
>>> the ovirt-engine using the accept defaults option.
>>>
>>> When I try to start a noVNC console I see this in the messages file:
>>>
>>> May 26 16:49:12 lfg-kvm saslpasswd2: Could not find keytab file:
>>> /etc/qemu/krb5.tab: No such file or directory
>>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>>> May 26 16:49:14 lfg-kvm journal: 2020-05-26 16:49:14,704-0400
>>> ovirt-websocket-proxy: INFO msg:824 handler exception: [SSL:
>>> SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown
>>> (_ssl.c:618)
>>> May 26 16:49:14 lfg-kvm ovirt-websocket-proxy.py:
>>> ovirt-websocket-proxy[14582] INFO msg:824 handler exception: [SSL:
>>> SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown
>>> (_ssl.c:618)
>>>
>>>
>>> I have checked the following:
>>>
>>> [root@lfg-kvm ~]#  engine-config -g WebSocketProxy
>>> WebSocketProxy: lfg-kvm.corp.lfg.com:6100 version: general
>>> [root@lfg-kvm ~]# engine-config -g SpiceProxyDefault
>>> SpiceProxyDefault: http://lfg-kvm.corp.lfg.com:6100 version: general
>>>
>>>
>>> This is a brand new install.
>>>
>>> I also am unable to get a VNC console up and running.  I have tried with
>>> an Ubuntu VM running on my MAC where I installed virt-manager.  The viewer
>>> comes up for a second says it cannot connect and then shutsdown.
>>>
>>>
>> If you're only using noVNC, then you need to make sure you import the CA
>> Cert and trust it in your browser.  There is no way to interactively accept
>> the self-signed cert from the engine when noVNC connects via the websocket
>> proxy.
>>
>>
>>> Anyone have any clue?
>>> -<<—->>-
>>> Louis Bohm
>>> louisb...@gmail.com
>>>
>>> <https://www.youracclaim.com/badges/f11e0d65-21ad-4458-895b-2c5b5cb11134/public_url>
>>>
>>> <https://www.youracclaim.com/badges/f11e0d65-21ad-4458-895b-2c5b5cb11134/public_url>
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/U66GSTI4QJSGPM6LUVF2WC2UW5JQCNCX/
>>>
>

[ovirt-users] Re: ovirt-websocket-proxy errors when trying noVNC

2020-05-28 Thread Jayme
If it’s the issue I’m thinking of, it’s because Apple (starting with Mojave)
began rejecting certs whose validity period is longer than a certain limit
(825 days), which the oVirt CA does not follow. I posted another message on
this group about it a little while ago and I think a bug report was made.

The only way I can get noVNC to work on a Mac is by using Firefox and making
sure the CA is imported and trusted by Firefox. I cannot get it to work
with Safari or Chrome.
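
For reference, the CA can also be pulled straight from the engine (replace
engine.example.com with your engine FQDN) and then imported into Firefox's
certificate store:

    curl -k -o ovirt-ca.pem \
      'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'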

On Thu, May 28, 2020 at 8:08 AM Louis Bohm  wrote:

> So as I said before I added the CA cert to my MAC (and I can see it in the
> MAC’s Keychain).  But its still not working.  For humor I will try adding
> the CA to my Windows VM and see if that produces a different result.
>
> Louis
> -<<—->>-
> Louis Bohm
> louisb...@gmail.com
>
>
> 
>
> 
>
> On May 27, 2020, at 11:01 AM, Scott Dickerson  wrote:
>
>
> On Wed, May 27, 2020 at 7:42 AM Louis Bohm  wrote:
>
>> OS: Oracle Linux 7.8 (unbreakable kernel)
>> Using Oracle Linux Virtualization Manager: Software
>> Version:4.3.6.6-1.0.9.el7
>>
>> Since I am running all of it on one physical machine I opted to install
>> the ovirt-engine using the accept defaults option.
>>
>> When I try to start a noVNC console I see this in the messages file:
>>
>> May 26 16:49:12 lfg-kvm saslpasswd2: Could not find keytab file:
>> /etc/qemu/krb5.tab: No such file or directory
>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>> May 26 16:49:14 lfg-kvm journal: 2020-05-26 16:49:14,704-0400
>> ovirt-websocket-proxy: INFO msg:824 handler exception: [SSL:
>> SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown
>> (_ssl.c:618)
>> May 26 16:49:14 lfg-kvm ovirt-websocket-proxy.py:
>> ovirt-websocket-proxy[14582] INFO msg:824 handler exception: [SSL:
>> SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown
>> (_ssl.c:618)
>>
>>
>> I have checked the following:
>>
>> [root@lfg-kvm ~]#  engine-config -g WebSocketProxy
>> WebSocketProxy: lfg-kvm.corp.lfg.com:6100 version: general
>> [root@lfg-kvm ~]# engine-config -g SpiceProxyDefault
>> SpiceProxyDefault: http://lfg-kvm.corp.lfg.com:6100 version: general
>>
>>
>> This is a brand new install.
>>
>> I also am unable to get a VNC console up and running.  I have tried with
>> an Ubuntu VM running on my MAC where I installed virt-manager.  The viewer
>> comes up for a second says it cannot connect and then shutsdown.
>>
>>
> If you're only using noVNC, then you need to make sure you import the CA
> Cert and trust it in your browser.  There is no way to interactively accept
> the self-signed cert from the engine when noVNC connects via the websocket
> proxy.
>
>
>> Anyone have any clue?
>> -<<—->>-
>> Louis Bohm
>> louisb...@gmail.com
>>
>> 
>>
>> 
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/U66GSTI4QJSGPM6LUVF2WC2UW5JQCNCX/
>>
>
>
> --
> Scott Dickerson
> Senior Software Engineer
> RHV-M Engineering - UX Team
> Red Hat, Inc
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WLZEDVEV5E4XTEM4Y6M4W3VJ4ODSISUS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F5PGDWYJM7IUAP67KGSHNKU377QEI3Q4/


[ovirt-users] Re: ovirt-node-ng-installer-4.4.0-2020051507.el8.iso does not support PREC5 raid controller ?

2020-05-15 Thread Jayme
This is likely due to CentOS 8, not the node image in particular. CentOS 8
dropped support for many LSI RAID controllers, including older PERC
controllers.
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/hardware-enablement_considerations-in-adopting-rhel-8#removed-hardware-support_hardware-enablement

It is possible to load drivers during install. I have not done it with node,
but I know it’s possible with a regular CentOS 8 install.
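
For what it’s worth, the stock installer mechanism for that is the inst.dd
boot option; the URL and device below are placeholders for wherever you keep
the driver update disk image:

    # appended to the installer kernel command line at the boot menu
    inst.dd=https://example.com/drivers/megaraid-driver-disk.iso
    # or point it at local media containing the driver update disk
    inst.dd=/dev/sdb1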



On Fri, May 15, 2020 at 8:04 PM  wrote:

> Trying to use my dell 2850 as ovirt node the install does not show the
> raid0 disk pair that ovirt-node-4.3.9 was able to use as install
> destination.
> The Installer shows no disk at all in system it has 6 seen by 4.3.9.
>
> Thanks Bryan
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDSVUZSESOXEFJNPHOXUH4HOOWRIRSB4/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YXSXVT23KMEHLUT2F6ESWIETRODWQPS2/


[ovirt-users] Re: Gluster deployment fails with missing UUID

2020-04-28 Thread Jayme
Has the drive been used before? It might have an existing partition/filesystem
on it. If you are sure it's fine to overwrite, try running wipefs -a
/dev/sdb on all hosts. Also make sure there aren't any filters set up in
lvm.conf (there shouldn't be on a fresh install, but it's worth checking).
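
Something like this on each host is what I'd check and run (destructive to
anything on /dev/sdb, so only if that disk really is disposable):

    lsblk -f /dev/sdb                      # see what signatures/partitions exist
    grep -n filter /etc/lvm/lvm.conf       # look for an active LVM device filter
    wipefs -a /dev/sdb                     # clear old filesystem/RAID/LVM signatures
    pvcreate --test /dev/sdb               # dry run to confirm LVM will now accept it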

On Tue, Apr 28, 2020 at 8:22 PM Shareef Jalloq  wrote:

> Hi,
>
> I'm running the gluster deployment flow and am trying to use a second
> drive as the gluster volume.  It's /dev/sdb on each node and I'm using the
> JBOD mode.
>
> I'm seeing the following gluster ansible task fail and a google search
> doesn't bring up much.
>
> TASK [gluster.infra/roles/backend_setup : Create volume groups]
> 
>
> failed: [ovirt-gluster-01.jalloq.co.uk] (item={u'vgname':
> u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item",
> "changed": false, "err": "  Couldn't find device with uuid
> Y8FVs8-LP6w-R6CR-Yosh-c40j-17XP-ttP3Np.\n  Couldn't find device with uuid
> tA4lpO-hM9f-S8ci-BdPh-lTve-0Rh1-3Bcsfy.\n  Couldn't find device with uuid
> RG3w6j-yrxn-2iMw-ngd0-HgMS-i5dP-CGjaRk.\n  Couldn't find device with uuid
> lQV02e-TUZE-PXCd-GWEd-eGqe-c2xC-pauHG7.\n  Device /dev/sdb excluded by a
> filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
> "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5U3K3IPYCFOLUFJ56FGJI3TYWT6NOLAZ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VTE5EPSGAAMXRLFQ75CHDW7MMPO5FGGC/


[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Jayme
Oh, and the gluster interface should not be set as the default route either.

On Tue, Apr 28, 2020 at 7:19 PM Jayme  wrote:

> On gluster interface try setting gateway to 10.0.1.1
>
> If that doesn’t work let us know where the process is failing currently
> and with what errors etc.
>
> On Tue, Apr 28, 2020 at 6:54 PM Shareef Jalloq 
> wrote:
>
>> Thanks.  I have the DNS but must have my interface config wrong.  On my
>> first node I have two interfaces in use, em1 for the management interface
>> and p1p1 for the Gluster interface.
>>
>> [root@ovirt-node-00 ~]# cat /etc/sysconfig/network-scripts/ifcfg-em1
>>
>> TYPE=Ethernet
>>
>> PROXY_METHOD=none
>>
>> BROWSER_ONLY=no
>>
>> BOOTPROTO=none
>>
>> DEFROUTE=yes
>>
>> IPV4_FAILURE_FATAL=no
>>
>> IPV6INIT=no
>>
>> IPV6_AUTOCONF=yes
>>
>> IPV6_DEFROUTE=yes
>>
>> IPV6_FAILURE_FATAL=no
>>
>> IPV6_ADDR_GEN_MODE=stable-privacy
>>
>> NAME=em1
>>
>> UUID=724cddb2-8ce9-43ea-8c0e-e1aff19e72cc
>>
>> DEVICE=em1
>>
>> ONBOOT=yes
>>
>> IPADDR=10.0.0.31
>>
>> PREFIX=24
>>
>> GATEWAY=10.0.0.1
>>
>> DNS1=10.0.0.1
>>
>>
>> [root@ovirt-node-00 ~]# cat /etc/sysconfig/network-scripts/ifcfg-p1p1
>>
>> TYPE=Ethernet
>>
>> PROXY_METHOD=none
>>
>> BROWSER_ONLY=no
>>
>> BOOTPROTO=none
>>
>> DEFROUTE=yes
>>
>> IPV4_FAILURE_FATAL=no
>>
>> IPV6INIT=no
>>
>> IPV6_AUTOCONF=yes
>>
>> IPV6_DEFROUTE=yes
>>
>> IPV6_FAILURE_FATAL=no
>>
>> IPV6_ADDR_GEN_MODE=stable-privacy
>>
>> NAME=p1p1
>>
>> UUID=1adb45d3-4dac-4bac-bb19-257fb9c7016b
>>
>> DEVICE=p1p1
>>
>> ONBOOT=yes
>>
>> IPADDR=10.0.1.31
>>
>> PREFIX=24
>>
>> GATEWAY=10.0.0.1
>>
>> DNS1=10.0.0.1
>>
>> On Tue, Apr 28, 2020 at 10:47 PM Jayme  wrote:
>>
>>>  You should use host names for gluster like gluster1.hostname.com that
>>> resolve to the ip chosen for gluster.
>>>
>>> For my env I have something like this:
>>>
>>> Server0:
>>> Host0.example.com 10.10.0.100
>>> Gluster0.example.com 10.0.1.100
>>>
>>> Same thing for the other two servers, except hostnames and IPs of course.
>>>
>>> Use the gluster hostnames for the first step, then the server hostnames
>>> for the others.
>>>
>>> I made sure I could ssh to and from both hostX and glusterX on each
>>> server.
>>>
>>> On Tue, Apr 28, 2020 at 6:34 PM Shareef Jalloq 
>>> wrote:
>>>
>>>> Perhaps it's me, but these two documents seem to disagree on what
>>>> hostnames to use when setting up.  Can someone clarify.
>>>>
>>>> The main documentation here:
>>>> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>>>>  talks
>>>> about copying the SSH keys to the gluster host address but the old blog
>>>> post with an outdated interface here:
>>>> https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>>>>  uses
>>>> the node address.
>>>>
>>>> In the first step of the hyperconverged Gluster wizard, when it asks
>>>> for "Gluster network address", is this wanting the host IP or the IP of the
>>>> Gluster interface?
>>>>
>>>> On Tue, Apr 28, 2020 at 10:24 PM Shareef Jalloq 
>>>> wrote:
>>>>
>>>>> OK, thanks both, that seems to have fixed that issue.
>>>>>
>>>>> Is there any other config I need to do because the next step in the
>>>>> deployment guide of copying SSH keys seems to take over a minute just to
>>>>> prompt for a password.  Something smells here.
>>>>>
>>>>> On Tue, Apr 28, 2020 at 7:32 PM Jayme  wrote:
>>>>>
>>>>>> You should be using a different subnet for each. I.e. 10.0.0.30 and
>>>>>> 10.0.1.30 for example
>>>>>>
>>>>>> On Tue, Apr 28, 2020 at 2:49 PM Shareef Jalloq 
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I'm in the process of trying to set up an HCI 3 node cluster in my
>>>>>>> homelab to better understand the Gluster setup and have failed at the 

[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Jayme
On the gluster interface, try setting the gateway to 10.0.1.1.

If that doesn’t work, let us know where the process is currently failing and
with what errors.
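
For example, something along these lines for ifcfg-p1p1 (illustrative; the key
points are DEFROUTE=no and a gateway inside 10.0.1.0/24, or none at all, since
the gluster peers all sit on the same segment):

    DEVICE=p1p1
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=10.0.1.31
    PREFIX=24
    DEFROUTE=no
    # GATEWAY=10.0.1.1 only if a router actually exists on that subnet;
    # otherwise leave the GATEWAY and DNS entries off entirely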

On Tue, Apr 28, 2020 at 6:54 PM Shareef Jalloq  wrote:

> Thanks.  I have the DNS but must have my interface config wrong.  On my
> first node I have two interfaces in use, em1 for the management interface
> and p1p1 for the Gluster interface.
>
> [root@ovirt-node-00 ~]# cat /etc/sysconfig/network-scripts/ifcfg-em1
>
> TYPE=Ethernet
>
> PROXY_METHOD=none
>
> BROWSER_ONLY=no
>
> BOOTPROTO=none
>
> DEFROUTE=yes
>
> IPV4_FAILURE_FATAL=no
>
> IPV6INIT=no
>
> IPV6_AUTOCONF=yes
>
> IPV6_DEFROUTE=yes
>
> IPV6_FAILURE_FATAL=no
>
> IPV6_ADDR_GEN_MODE=stable-privacy
>
> NAME=em1
>
> UUID=724cddb2-8ce9-43ea-8c0e-e1aff19e72cc
>
> DEVICE=em1
>
> ONBOOT=yes
>
> IPADDR=10.0.0.31
>
> PREFIX=24
>
> GATEWAY=10.0.0.1
>
> DNS1=10.0.0.1
>
>
> [root@ovirt-node-00 ~]# cat /etc/sysconfig/network-scripts/ifcfg-p1p1
>
> TYPE=Ethernet
>
> PROXY_METHOD=none
>
> BROWSER_ONLY=no
>
> BOOTPROTO=none
>
> DEFROUTE=yes
>
> IPV4_FAILURE_FATAL=no
>
> IPV6INIT=no
>
> IPV6_AUTOCONF=yes
>
> IPV6_DEFROUTE=yes
>
> IPV6_FAILURE_FATAL=no
>
> IPV6_ADDR_GEN_MODE=stable-privacy
>
> NAME=p1p1
>
> UUID=1adb45d3-4dac-4bac-bb19-257fb9c7016b
>
> DEVICE=p1p1
>
> ONBOOT=yes
>
> IPADDR=10.0.1.31
>
> PREFIX=24
>
> GATEWAY=10.0.0.1
>
> DNS1=10.0.0.1
>
> On Tue, Apr 28, 2020 at 10:47 PM Jayme  wrote:
>
>>  You should use host names for gluster like gluster1.hostname.com that
>> resolve to the ip chosen for gluster.
>>
>> For my env I have something like this:
>>
>> Server0:
>> Host0.example.com 10.10.0.100
>> Gluster0.example.com 10.0.1.100
>>
>> Same thing for the other two servers, except hostnames and IPs of course.
>>
>> Use the gluster hostnames for the first step, then the server hostnames for
>> the others.
>>
>> I made sure I could ssh to and from both hostX and glusterX on each
>> server.
>>
>> On Tue, Apr 28, 2020 at 6:34 PM Shareef Jalloq 
>> wrote:
>>
>>> Perhaps it's me, but these two documents seem to disagree on what
>>> hostnames to use when setting up.  Can someone clarify.
>>>
>>> The main documentation here:
>>> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>>>  talks
>>> about copying the SSH keys to the gluster host address but the old blog
>>> post with an outdated interface here:
>>> https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>>>  uses
>>> the node address.
>>>
>>> In the first step of the hyperconverged Gluster wizard, when it asks for
>>> "Gluster network address", is this wanting the host IP or the IP of the
>>> Gluster interface?
>>>
>>> On Tue, Apr 28, 2020 at 10:24 PM Shareef Jalloq 
>>> wrote:
>>>
>>>> OK, thanks both, that seems to have fixed that issue.
>>>>
>>>> Is there any other config I need to do because the next step in the
>>>> deployment guide of copying SSH keys seems to take over a minute just to
>>>> prompt for a password.  Something smells here.
>>>>
>>>> On Tue, Apr 28, 2020 at 7:32 PM Jayme  wrote:
>>>>
>>>>> You should be using a different subnet for each. I.e. 10.0.0.30 and
>>>>> 10.0.1.30 for example
>>>>>
>>>>> On Tue, Apr 28, 2020 at 2:49 PM Shareef Jalloq 
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I'm in the process of trying to set up an HCI 3 node cluster in my
>>>>>> homelab to better understand the Gluster setup and have failed at the 
>>>>>> first
>>>>>> hurdle. I've set up the node interfaces on the built in NIC and am using 
>>>>>> a
>>>>>> PCI NIC for the Gluster traffic - at the moment this is 1Gb until I can
>>>>>> upgrade - and I've assigned a static IP to both interfaces and also have
>>>>>> both entries in my DNS.
>>>>>>
>>>>>> From any of the three nodes, I can ping the gateway, the other nodes,
>>>>>> any external IP but I can't ping any of the Gluster NICs.  What have I
>>>>>> forgotten to do? Here's the relevant output of 'ip addr show'.  em1 is 
>>>>>>

[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Jayme
You should use host names for gluster, like gluster1.hostname.com, that
resolve to the IP chosen for gluster.

For my env I have something like this:

Server0:
Host0.example.com 10.10.0.100
Gluster0.example.com 10.0.1.100

Same thing for the other two servers, except hostnames and IPs of course.

Use the gluster hostnames for the first step, then the server hostnames for
the others.

I made sure I could ssh to and from both hostX and glusterX on each server.
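
If you don't want to rely on DNS, the equivalent /etc/hosts entries on every
node would look roughly like this (the example.com names and addresses are
just placeholders):

    10.10.0.100  host0.example.com     host0
    10.10.0.101  host1.example.com     host1
    10.10.0.102  host2.example.com     host2
    10.0.1.100   gluster0.example.com  gluster0
    10.0.1.101   gluster1.example.com  gluster1
    10.0.1.102   gluster2.example.com  gluster2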

On Tue, Apr 28, 2020 at 6:34 PM Shareef Jalloq  wrote:

> Perhaps it's me, but these two documents seem to disagree on what
> hostnames to use when setting up.  Can someone clarify.
>
> The main documentation here:
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>  talks
> about copying the SSH keys to the gluster host address but the old blog
> post with an outdated interface here:
> https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>  uses
> the node address.
>
> In the first step of the hyperconverged Gluster wizard, when it asks for
> "Gluster network address", is this wanting the host IP or the IP of the
> Gluster interface?
>
> On Tue, Apr 28, 2020 at 10:24 PM Shareef Jalloq 
> wrote:
>
>> OK, thanks both, that seems to have fixed that issue.
>>
>> Is there any other config I need to do because the next step in the
>> deployment guide of copying SSH keys seems to take over a minute just to
>> prompt for a password.  Something smells here.
>>
>> On Tue, Apr 28, 2020 at 7:32 PM Jayme  wrote:
>>
>>> You should be using a different subnet for each. I.e. 10.0.0.30 and
>>> 10.0.1.30 for example
>>>
>>> On Tue, Apr 28, 2020 at 2:49 PM Shareef Jalloq 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I'm in the process of trying to set up an HCI 3 node cluster in my
>>>> homelab to better understand the Gluster setup and have failed at the first
>>>> hurdle. I've set up the node interfaces on the built in NIC and am using a
>>>> PCI NIC for the Gluster traffic - at the moment this is 1Gb until I can
>>>> upgrade - and I've assigned a static IP to both interfaces and also have
>>>> both entries in my DNS.
>>>>
>>>> From any of the three nodes, I can ping the gateway, the other nodes,
>>>> any external IP but I can't ping any of the Gluster NICs.  What have I
>>>> forgotten to do? Here's the relevant output of 'ip addr show'.  em1 is the
>>>> motherboard NIC and p1p1 is port 1 of an Intel NIC.  The
>>>> /etc/sysconfig/network-scripts/ifcfg- scripts are identical aside from
>>>> IPADDR, NAME, DEVICE and UUID fields.
>>>>
>>>> Thanks, Shareef.
>>>>
>>>> [root@ovirt-node-00 ~]# ip addr show
>>>>
>>>>
>>>> 2: p1p1:  mtu 1500 qdisc mq state UP
>>>> group default qlen 1000
>>>>
>>>> link/ether a0:36:9f:1f:f9:78 brd ff:ff:ff:ff:ff:ff
>>>>
>>>> inet 10.0.0.34/24 brd 10.0.0.255 scope global noprefixroute p1p1
>>>>
>>>>valid_lft forever preferred_lft forever
>>>>
>>>> inet6 fd4d:e9e3:6f5:1:a236:9fff:fe1f:f978/64 scope global
>>>> mngtmpaddr dynamic
>>>>
>>>>valid_lft 7054sec preferred_lft 7054sec
>>>>
>>>> inet6 fe80::a236:9fff:fe1f:f978/64 scope link
>>>>
>>>>valid_lft forever preferred_lft forever
>>>>
>>>>
>>>> 4: em1:  mtu 1500 qdisc pfifo_fast
>>>> state UP group default qlen 1000
>>>>
>>>> link/ether 98:90:96:a1:16:ad brd ff:ff:ff:ff:ff:ff
>>>>
>>>> inet 10.0.0.31/24 brd 10.0.0.255 scope global noprefixroute em1
>>>>
>>>>valid_lft forever preferred_lft forever
>>>>
>>>> inet6 fd4d:e9e3:6f5:1:9a90:96ff:fea1:16ad/64 scope global
>>>> mngtmpaddr dynamic
>>>>
>>>>valid_lft 7054sec preferred_lft 7054sec
>>>>
>>>> inet6 fe80::9a90:96ff:fea1:16ad/64 scope link
>>>>
>>>>valid_lft forever preferred_lft forever
>>>>
>>>>
>>>> ___
>>>> Users mailing list -- users@ovirt.org
>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/S7UESGZ6MJXPVKN2UZJTO4OZYGOQIWHE/
>>>>
>>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KVMZSD7YPPSCFO6RKTRKA2BAVJGAFDRE/


[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Jayme
You should be using a different subnet for each, e.g. 10.0.0.30 and
10.0.1.30.

On Tue, Apr 28, 2020 at 2:49 PM Shareef Jalloq  wrote:

> Hi,
>
> I'm in the process of trying to set up an HCI 3 node cluster in my homelab
> to better understand the Gluster setup and have failed at the first hurdle.
> I've set up the node interfaces on the built in NIC and am using a PCI NIC
> for the Gluster traffic - at the moment this is 1Gb until I can upgrade -
> and I've assigned a static IP to both interfaces and also have both entries
> in my DNS.
>
> From any of the three nodes, I can ping the gateway, the other nodes, any
> external IP but I can't ping any of the Gluster NICs.  What have I
> forgotten to do? Here's the relevant output of 'ip addr show'.  em1 is the
> motherboard NIC and p1p1 is port 1 of an Intel NIC.  The
> /etc/sysconfig/network-scripts/ifcfg- scripts are identical aside from
> IPADDR, NAME, DEVICE and UUID fields.
>
> Thanks, Shareef.
>
> [root@ovirt-node-00 ~]# ip addr show
>
>
> 2: p1p1:  mtu 1500 qdisc mq state UP
> group default qlen 1000
>
> link/ether a0:36:9f:1f:f9:78 brd ff:ff:ff:ff:ff:ff
>
> inet 10.0.0.34/24 brd 10.0.0.255 scope global noprefixroute p1p1
>
>valid_lft forever preferred_lft forever
>
> inet6 fd4d:e9e3:6f5:1:a236:9fff:fe1f:f978/64 scope global mngtmpaddr
> dynamic
>
>valid_lft 7054sec preferred_lft 7054sec
>
> inet6 fe80::a236:9fff:fe1f:f978/64 scope link
>
>valid_lft forever preferred_lft forever
>
>
> 4: em1:  mtu 1500 qdisc pfifo_fast state
> UP group default qlen 1000
>
> link/ether 98:90:96:a1:16:ad brd ff:ff:ff:ff:ff:ff
>
> inet 10.0.0.31/24 brd 10.0.0.255 scope global noprefixroute em1
>
>valid_lft forever preferred_lft forever
>
> inet6 fd4d:e9e3:6f5:1:9a90:96ff:fea1:16ad/64 scope global mngtmpaddr
> dynamic
>
>valid_lft 7054sec preferred_lft 7054sec
>
> inet6 fe80::9a90:96ff:fea1:16ad/64 scope link
>
>valid_lft forever preferred_lft forever
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/S7UESGZ6MJXPVKN2UZJTO4OZYGOQIWHE/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I2UVP4THQIODVBRN46IHDYYDIWBFLG4E/


[ovirt-users] Re: VM disk I/O

2020-04-21 Thread Jayme
What is the VM optimizer you speak of?

Have you tried the High Performance VM profile? When set, it will prompt you
to make additional manual changes such as configuring NUMA and hugepages,
etc.
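
As a rough example of the hugepages part (the sizes are assumptions, check
them against your host's memory; as far as I recall the custom property value
is the page size in KiB):

    # on each host, reserve 16 x 1GiB pages at boot
    grubby --update-kernel=ALL --args="default_hugepagesz=1G hugepagesz=1G hugepages=16"
    # then in the VM's Custom Properties set:  hugepages=1048576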



On Tue, Apr 21, 2020 at 8:52 AM  wrote:

> On oVirt 4.3. i installed w10_64 with q35 cpu.
> i've used vm optimizer for better performans for end-users. it seams good.
> But i need more performance guidelines.
> Ex.
> Our system has FC storage, is tere any options for better read/write
> performans, Hugepage, write through
> Like this, if you have any suggestions, could you share
>
> Thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KH6ULJYIRBPDNEAR5CASDY2IOT3ARVHA/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VCWGMW42Z77FUQ7L27IFAMRJHB2P5HUS/


[ovirt-users] Re: VM's in status unknown

2020-04-16 Thread Jayme
Do you have the guest agent installed on the VMs?

On Thu, Apr 16, 2020 at 2:55 PM  wrote:

> Are you getting any errors in the engine log or
> /var/log/libvirt/qemu/.log?
> I have Windows 10 and haven't experienced that. You can't shut it down in
> the UI? Even after you try to shut it down inside Windows?
> I will assume you have the latest guest tools installed.
>
> Eric Evans
> Digital Data Services LLC.
> 304.660.9080
>
>
> -Original Message-
> From: kim.karga...@noroff.no 
> Sent: Thursday, April 16, 2020 8:23 AM
> To: users@ovirt.org
> Subject: [ovirt-users] VM's in status unknown
>
> Hi,
>
> We have a few Windows 10 VM's running on our ovirt 4.3, where when you
> shutdown the VM from within Windows, that VM does not shut down but gets a
> status of unknown in ovirt and one cannot do anything to the machines
> within the web gui. This seems to specfically be related to that Windows 10
> template that we have created. Any ideas? Also, any ideas on how one can
> shut these machines down?
>
> Thanks
>
> Kim
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org Privacy Statement:
> https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UM26TABUTG373QVXEI4UJN3EAKANLWHL/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/M4LITHLIZXJDTZACENZX4NLO7LSB6VAM/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/77WG7WMSTFE7XBJN6EU3E3AZMV36TZ6B/


[ovirt-users] Re: How to load VMs after importing a domain

2020-04-16 Thread Jayme
In oVirt admin go to Storage > Domains. Click your storage domain. Click
"Virtual Machines" tab. You should see a list of VMs on that storage
domain. Click one or highlight multiple then click import.

On Thu, Apr 16, 2020 at 2:34 PM  wrote:

> If you click on the 3 dots in the vm portal, there is an import there,
> then chose what you import from.
>
> See attached screenshot.
>
> Is this what your looking for?
>
>
>
> Eric Evans
>
> Digital Data Services LLC.
>
> 304.660.9080
>
>
>
> *From:* Shareef Jalloq 
> *Sent:* Thursday, April 16, 2020 10:11 AM
> *To:* Ovirt Users 
> *Subject:* [ovirt-users] How to load VMs after importing a domain
>
>
>
> I've followed the online instructions on importing a pre-configured domain
> into a new data centre but I can't see how to import the VMs.  The
> documentation just says, "You can now import virtual machines and templates
> from the storage domain to the data center." with no other info.
>
>
>
> What do I need to do in order to get my VMs up and running?
>
>
>
> Cheers, Shareef.
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ECF6LZMYPEOTZRJ4UTGGLBMJFVWNLFSR/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HXN6VZDQL2E3IMSZ4KPN6NGA63PH73WN/


[ovirt-users] Re: Can't deploy engine vm with ovirt-hosted-engine-setup

2020-04-14 Thread Jayme
The error suggests a problem with Ansible. What packages are you using?
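
Something like this on the host would help narrow it down (just a generic
package query, adjust the pattern as needed):

    rpm -qa | grep -Ei 'ansible|ovirt-hosted-engine|ovirt-ansible'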

On Tue, Apr 14, 2020 at 1:51 AM Gabriel Bueno  wrote:

> Does anyone have any clue that it may be happening?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UML2K4XNGD6JBTQEYDYQS2ZQABOC6X3T/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PALXTTJXGPL5GEDDHLZH3AO5RB53ESZB/


[ovirt-users] engine certificate problems in MacOS Catalina

2020-04-13 Thread Jayme
I recently set up a new oVirt environment using the latest 4.3.9 installer. I
can't seem to get the noVNC client to work for the life of me in Safari or
Chrome on macOS Catalina.

I have downloaded the CA from the login page and imported it into the keychain,
and made sure it was fully trusted, in both the system and login keychains.

Looking at this: https://support.apple.com/en-us/HT210176 it seems that the
certificate may be treated as invalid if its validity period is longer than 825
days. The certificate generated by the oVirt installer seems to be valid for 5
years. I'm not sure if this is the issue or something else is wrong, but if
cert length is the problem, is there a way for me to regenerate a new
certificate with a shorter validity period?
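
For reference, this is how I'm checking the validity window of the engine
certificate (plain openssl; engine.example.com stands in for my engine FQDN):

    echo | openssl s_client -connect engine.example.com:443 2>/dev/null \
      | openssl x509 -noout -dates
    # notBefore/notAfter come out roughly 5 years apart, well over Apple's 825-day limit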
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YNL6NSW6GP3IR7GECYE6DNPJA6H2X3RB/


[ovirt-users] snapshot options on remote NFS storage

2020-04-03 Thread Jayme
Was wondering if there are any guides, or if anyone could share their
storage configuration details for NFS. If using LVM, is it safe to snapshot
volumes holding running VM images for backup purposes?
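
For concreteness, this is the kind of thing I have in mind on the NFS server
(the VG/LV names are made up):

    lvcreate --snapshot --size 50G --name nfs_export_snap /dev/vg_nfs/lv_nfs_export
    mount -o ro,nouuid /dev/vg_nfs/nfs_export_snap /mnt/backup   # nouuid only needed for XFS
    # rsync/tar the VM images out of /mnt/backup, then drop the snapshot
    umount /mnt/backup && lvremove -y /dev/vg_nfs/nfs_export_snap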
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XWTO4BVMIJRPF7EMEGOK2XTZZU6PIPYK/


[ovirt-users] Re: Speed Issues

2020-03-27 Thread Jayme
Christian,

I've been following along with interest, as I've also been trying
everything I can to improve gluster performance in my HCI cluster. My issue
is mostly latency-related and my workloads are typically small-file
operations, which have been especially challenging.

A couple of things:

1. About the MTU, did you also enable jumbo frames at switch level (if
applicable)? I have jumbo frames enabled but honestly didn't see much of an
impact from doing so.

2. About libgfapi. It's actually quite simple to enable it (at least if you
want to do some testing). It can be enabled on the hosted engine using
engine-config, i.e. engine-config -s LibgfApiSupported=true. From my
experience you can do this while VMs are running and they won't pick up the
new config until powered off/restarted, so you are able to test it out on
one VM. Again, as some others have mentioned, this is not a default
option in oVirt because there are known bugs with the libgfapi
implementation. Some have worked around these bugs in various ways
but, like you, I am not willing to do so in a production environment. Still,
I think it's very much worth doing some tests on a VM with libgfapi enabled
compared to the default FUSE mount.
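
Roughly the sequence I used, for reference (run on the engine VM; the setting
only takes effect after ovirt-engine restarts and the VM has been powered
off/on again):

    engine-config -s LibgfApiSupported=true
    engine-config -g LibgfApiSupported        # confirm it took
    systemctl restart ovirt-engine
    # after a full power off/on of a test VM, its qemu command line on the host
    # should reference a gluster:// URL instead of a FUSE-mounted path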



On Fri, Mar 27, 2020 at 7:44 AM Christian Reiss 
wrote:

> Hey,
>
> thanks for writing. If I go for dont choose local my speed drops
> dramatically (halving). Speed between the hosts is okay (tested) but for
> some odd reason the mtu is at 1500 still. I was sure I set it to
> jumbo/9k. Oh well.
>
> Not during runtime. I can hear the gluster scream if the network dies
> for a second :)
>
> -Chris.
>
> On 24/03/2020 18:33, Darrell Budic wrote:
>  >
>  > cluster.choose-local: false
>  > cluster.read-hash-mode: 3
>  >
>  > if you have separate servers or nodes with are not HCI to allow it
>  > spread reads over multiple nodes.
> --
>   Christian Reiss - em...@christian-reiss.de /"\  ASCII Ribbon
> supp...@alpha-labs.net   \ /Campaign
>   X   against HTML
>   WEB alpha-labs.net / \   in eMails
>
>   GPG Retrieval https://gpg.christian-reiss.de
>   GPG ID ABCD43C5, 0x44E29126ABCD43C5
>   GPG fingerprint = 9549 F537 2596 86BA 733C  A4ED 44E2 9126 ABCD 43C5
>
>   "It's better to reign in hell than to serve in heaven.",
>John Milton, Paradise lost.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QYS7RIHXYAYW7XTPFVZBUHNGPFQMYA7H/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5JBBWOM3KGQ3FPY2OCW7ZBD4EGFEGDTR/


[ovirt-users] Re: Speed Issues

2020-03-24 Thread Jayme
I strongly believe that the FUSE mount is the real reason for poor performance
in HCI, and these minor gluster and other tweaks won't satisfy most users
seeking I/O performance. Enabling libgfapi is probably the best option. Red Hat
has recently closed bug reports related to libgfapi citing "won't fix", and one
comment suggests that libgfapi was not showing a good enough performance gain
to bother with, which appears to contradict what many oVirt users are seeing.
It's confusing to me why libgfapi as a default option is not being given
any priority.

https://bugzilla.redhat.com/show_bug.cgi?id=1465810

"We do not plan to enable libgfapi for oVirt/RHV. We did not find enough
performance improvement justification for it"

On Tue, Mar 24, 2020 at 3:34 PM Alex McWhirter  wrote:

> Red hat also recommends a shard size of 512mb, it's actually the only
> shard size they support. Also check the chunk size on the LVM thin pools
> running the bricks, should be at least 2mb. Note that changing the shard
> size only applies to new VM disks after the change. Changing the chunk
> size requires making a new brick.
>
> libgfapi brings a huge performance boost, in my opinion its almost a
> necessity unless you have a ton of extra disk speed / network
> throughput. Just be aware of the caveats.
>
> On 2020-03-24 14:12, Strahil Nikolov wrote:
> > On March 24, 2020 7:33:16 PM GMT+02:00, Darrell Budic
> >  wrote:
> >> Christian,
> >>
> >> Adding on to Stahil’s notes, make sure you’re using jumbo MTUs on
> >> servers and client host nodes. Making sure you’re using appropriate
> >> disk schedulers on hosts and VMs is important, worth double checking
> >> that it’s doing what you think it is. If you are only HCI, gluster’s
> >> choose-local on is a good thing, but try
> >>
> >> cluster.choose-local: false
> >> cluster.read-hash-mode: 3
> >>
> >> if you have separate servers or nodes with are not HCI to allow it
> >> spread reads over multiple nodes.
> >>
> >> Test out these settings if you have lots of RAM and cores on your
> >> servers, they work well for me with 20 cores and 64GB ram on my
> >> servers
> >> with my load:
> >>
> >> performance.io-thread-count: 64
> >> performance.low-prio-threads: 32
> >>
> >> these are worth testing for your workload.
> >>
> >> If you’re running VMs with these, test out libglapi connections, it’s
> >> significantly better for IO latency than plain fuse mounts. If you can
> >> tolerate the issues, the biggest one at the moment being you can’t
> >> take
> >> snapshots of the VMs with it enabled as of March.
> >>
> >> If you have tuned available, I use throughput-performance on my
> >> servers
> >> and guest-host on my vm nodes, throughput-performance on some HCI
> >> ones.
> >>
> >>
> >> I’d test with out the fips-rchecksum setting, that may be creating
> >> extra work for your servers.
> >>
> >> If you mounted individual bricks, check that you disabled barriers on
> >> them at mount if appropriate.
> >>
> >> Hope it helps,
> >>
> >>  -Darrell
> >>
> >>> On Mar 24, 2020, at 6:23 AM, Strahil Nikolov 
> >> wrote:
> >>>
> >>> On March 24, 2020 11:20:10 AM GMT+02:00, Christian Reiss
> >>  wrote:
>  Hey Strahil,
> 
>  seems you're the go-to-guy with pretty much all my issues. I thank
> >> you
>  for this and your continued support. Much appreciated.
> 
> 
>  200mb/reads however seems like a broken config or malfunctioning
>  gluster
>  than requiring performance tweaks. I enabled profiling so I have
> >> real
>  life data available. But seriously even without tweaks I would like
>  (need) 4 times those numbers, 800mb write speed is okay'ish, given
> >> the
>  fact that 10gbit backbone can be the limiting factor.
> 
>  We are running BigCouch/CouchDB Applications that really really need
>  IO.
>  Not in throughput but in response times. 200mb/s is just way off.
> 
>  It feels as gluster can/should do more, natively.
> 
>  -Chris.
> 
>  On 24/03/2020 06:17, Strahil Nikolov wrote:
> > Hey Chris,,
> >
> > You got some options.
> > 1. To speedup the reads in HCI - you can use the option :
> > cluster.choose-local: on
> > 2. You can adjust the server and client event-threads
> > 3. You can use NFS Ganesha (which connects to all servers via
>  libgfapi)  as a NFS Server.
> > In such case you have to use some clustering like ctdb or
> >> pacemaker.
> > Note:disable cluster.choose-local if you use this one
> > 4 You can try the built-in NFS , although it's deprecated (NFS
>  Ganesha is fully supported)
> > 5.  Create a gluster profile during the tests. I have seen numerous
>  improperly selected tests -> so test with real-world  workload.
>  Synthetic tests are not good.
> >
> > Best Regards,
> > Strahil Nikolov
> >>>
> >>> Hey Chris,
> >>>
> >>> What type is your VM ?
> >>> Try with 'High Performance' one (there is a  good RH documentation on
> >> that topic).
> >>>
> >>> 

[ovirt-users] Re: Artwork: 4.4 GA banners

2020-03-24 Thread Jayme
Hey Sandro,

Do you have more specific details or guidelines in regards to the graphics
you are looking for?

Thanks!

On Tue, Mar 24, 2020 at 1:27 PM Sandro Bonazzola 
wrote:

> Hi,
> in preparation of oVirt 4.4 GA it would be nice to have some graphics we
> can use for launching oVirt 4.4 GA on social media and oVirt website.
> If you don't have coding skills but you have marketing or design skills
> this is a good opportunity to contribute back to the project.
> Looking forward to your designs!
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> *
> *
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.*
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WQWKXCPQVII5SZX2AX2SGUYORDVG5KS6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XXQDEW5ONO6L7RFJY34EAZOHVRFP7WDD/


[ovirt-users] Re: Speed Issues

2020-03-23 Thread Jayme
I too struggle with speed issues in HCI. Latency is a big problem for me with
writes, especially when dealing with small-file workloads. How are you testing
exactly?

Look into enabling libgfapi and try some comparisons with that. People have
been saying it’s much faster, but it’s not a default option and has a few
bugs. Red Hat devs do not appear to be giving its implementation any priority,
unfortunately.
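
For what it's worth, the way I understand it being turned on is via
engine-config on the engine (this is from memory, so double check the key name
and cluster version level against your own setup, and I believe running VMs
need a full shut down/start to pick it up):

engine-config -s LibgfApiSupported=true --cver=4.3
systemctl restart ovirt-engine
engine-config -g LibgfApiSupported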

I’ve been considering switching to NFS storage because I’m seeing much better
performance in testing with it. I have some NVMe drives on the way and am
curious how they would perform in HCI, but I’m thinking the issue is not a
disk bottleneck (that appears very obvious in your case as well).


On Mon, Mar 23, 2020 at 6:44 PM Christian Reiss 
wrote:

> Hey folks,
>
> Gluster-related question: with SSDs in a RAID that can do 2 GB/s writes and
> reads (actually above that, but meh), in a 3-way HCI cluster connected with
> a 10gbit connection, things are pretty slow inside gluster.
>
> I have these settings:
>
> Options Reconfigured:
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.shd-max-threads: 8
> features.shard: on
> features.shard-block-size: 64MB
> server.event-threads: 8
> user.cifs: off
> cluster.shd-wait-qlength: 1
> cluster.locking-scheme: granular
> cluster.eager-lock: enable
> performance.low-prio-threads: 32
> network.ping-timeout: 30
> cluster.granular-entry-heal: enable
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.choose-local: true
> client.event-threads: 16
> performance.strict-o-direct: on
> network.remote-dio: enable
> performance.client-io-threads: on
> nfs.disable: on
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> cluster.readdir-optimize: on
> cluster.metadata-self-heal: on
> cluster.data-self-heal: on
> cluster.entry-self-heal: on
> cluster.data-self-heal-algorithm: full
> features.uss: enable
> features.show-snapshot-directory: on
> features.barrier: disable
> auto-delete: enable
> snap-activate-on-create: enable
>
> Writing inside the /gluster_bricks directory yields those 2GB/sec writes;
> reading is the same.
>
> Reads inside the /rhev/data-center/mnt/glusterSD/ dir go down to 366mb/sec
> while writes plummet to 200mb/sec.
>
> Summed up: writing into the SSD RAID in the LVM/XFS gluster brick directory
> is fast; writing into the mounted gluster dir is horribly slow.
>
> The above can be seen and repeated on all 3 servers. The network can do
> full 10gbit (tested with, among others: rsync, iperf3).
>
> Anyone with some idea on whats missing/ going on here?
>
> Thanks folks,
> as always stay safe and healthy!
>
> --
> with kind regards,
> mit freundlichen Gruessen,
>
> Christian Reiss
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OMAAERV4IUISYEWD4QP5OAM4DK4JTTLF/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5FGWRA4X53LPH42FHWEEQ7HLTZJQUGOL/


[ovirt-users] Re: Gluster Settings

2020-03-19 Thread Jayme
It applies a profile for the virt group. You can get more info here:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/app-virt_profile

Or you can look at the file directly; it’s basically just a list of gluster
volume options to be applied. I can’t remember off the top of my head where
the profiles are located, but they shouldn’t be too difficult to find.
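
If memory serves, the group definition files live under
/var/lib/glusterd/groups/ on the gluster nodes (worth verifying on your own
systems). Something like this should show what the profile contains and apply
it, which should be equivalent to what the "Optimize for Virt Store" button
does:

cat /var/lib/glusterd/groups/virt
gluster volume set <your_volume> group virt
gluster volume info <your_volume>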

On Thu, Mar 19, 2020 at 7:45 AM Christian Reiss 
wrote:

> Yeah,
>
> That button scares me. What does it do, precisely?
>
> On 19/03/2020 11:18, Jayme wrote:
> > At the very least you should make sure to apply the gluster virt profile
> > to vm volumes. This can also be done using optimize for virt store in
> > the ovirt GUI
>
> --
> with kind regards,
> mit freundlichen Gruessen,
>
> Christian Reiss
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7L2M4N6LAQST7ZKFVQ4FWDSF3BHKC7YQ/


[ovirt-users] Re: Gluster Settings

2020-03-19 Thread Jayme
At the very least you should make sure to apply the gluster virt profile to
VM volumes. This can also be done using the "Optimize for Virt Store" option
in the oVirt GUI.

On Thu, Mar 19, 2020 at 6:54 AM Christian Reiss 
wrote:

> Hey folks,
>
> quick question. For running Gluster / oVirt I found several places, some
> outdated (ovirt docs), gluster Mailinglists, oVirt Mailinglists etc that
> recommend different things.
>
> Here is what I found out/configured:
>
> features.barrier: disable
> features.show-snapshot-directory: on
> features.uss: enable
> cluster.data-self-heal-algorithm: full
> cluster.entry-self-heal: on
> cluster.data-self-heal: on
> cluster.metadata-self-heal: on
> cluster.readdir-optimize: on
> transport.address-family: inet
> storage.fips-mode-rchecksum: on
> nfs.disable: on
> performance.client-io-threads: off
> network.remote-dio: off
> performance.strict-o-direct: on
> client.event-threads: 16
> cluster.choose-local: true
> snap-activate-on-create: enable
> auto-delete: enable
>
> Would you agree, or would you change anything (usual VM workload)?
>
> Thanks! o/
> And keep healthy.
>
> --
> with kind regards,
> mit freundlichen Gruessen,
>
> Christian Reiss
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ABTBEHQG7A3F45F7TS2EB3KAGVHGUC5N/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YM445LCMHEH6XPUJG4EOPEGTUQJI75LS/


[ovirt-users] adding new gluster volume

2020-03-17 Thread Jayme
What steps, if any, do I need to take prior to adding an additional gluster
volume to my HCI cluster using new storage devices via the oVirt GUI? Will the
GUI prepare the devices (XFS/LVM etc.) or do I need to do that beforehand?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EXFXCKCFXISAZVUJRPB5UBIK6HTCMH3J/


[ovirt-users] Re: Can anyone explain what gluster 4k support is?

2020-03-17 Thread Jayme
Are there performance improvements from using a larger block size?

On Tue, Mar 17, 2020 at 5:29 AM Vojtech Juranek  wrote:

> On pondělí 16. března 2020 22:53:32 CET Strahil Nikolov wrote:
> > On March 16, 2020 11:08:16 PM GMT+02:00, Vojtech Juranek
>  wrote:
> > >On středa 11. března 2020 21:13:13 CET Jayme wrote:
> > >> I noticed Gluster 4k support mentioned in recent oVirt release notes.
> > >
> > >Can
> > >
> > >> anyone explain what this is about?
> > >
> > >before, we supported only disks with block size 512 B. Now we also support
> > >disks with 4 kB (aka 4k), for now only on Gluster SD. If you want to learn
> > >more about this feature, you can check the slides [1]. I also noticed that
> > >the video recording of Nir's talk at FOSDEM is already available, so you
> > >can watch the whole talk [2].
> > >
> > >Vojta
> > >
> > >[1]
> > >
> https://docs.google.com/presentation/d/1ClLMZ4XAb8CPhYOw6mNpv0va5JWcAgpyFbJ
> > >n-Bml_aY/ [2] https://video.fosdem.org/2020/H.1309/vai_ovirt_4k.webm
> >
> > Can I switch off VDO's 512byte emulation ?
>
> yes, now Gluster SD should work without 512 block size
> emulation
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LEG5DXY3A5HSABLDHQAO2M53G3SC4E2U/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7FVGBJQZSIETJRWEUTR4NC4YQGFZX7SH/


[ovirt-users] Re: Ansible playbook timeout

2020-03-15 Thread Jayme
This is all that should be needed; I've done so on my engine and it works fine
to set the timeout much higher. My guess is that you did not restart the
engine after changing the config.
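
Roughly, assuming the override file from your message is already in place, the
remaining step is just the restart (the engine only reads the conf.d overrides
at startup):

cat /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
systemctl restart ovirt-engine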

On Sun, Mar 15, 2020 at 10:44 AM Barrett Richardson 
wrote:

> Version 4.2.8.2-1.0.9.el7
>
> Per the info near the bottom of
> /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.conf I should be
> able to create this file,
>
> /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
>
> and place in the file these contents,
>
> ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=150
>
> and extend the playbook timeout.  It doesn't work, still times out after
> 30 minutes.  Any suggested workarounds?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UMKQAVQQIN6XU6LL4ZZDEBZH5DWZDMH6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/24EZTZQ4WFIVEWPZV3NYTWNNIHJ7D67J/


[ovirt-users] Can anyone explain what gluster 4k support is?

2020-03-11 Thread Jayme
I noticed Gluster 4k support mentioned in recent oVirt release notes. Can
anyone explain what this is about?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BWVZDUUBJHJHIVI4UJT5GSWFU4HP4V5B/


[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-08 Thread Jayme
OK, this is even stranger. I ran the same dd test against my SSD OS/boot
drives on the oVirt node hosts, which use the same model drive (only smaller)
and the same H310 controller (the only difference being that the OS/boot
drives are in a RAID mirror while the gluster drives are passthrough). The
test completes in under 2 seconds in /tmp on the host but takes ~45 seconds in
/gluster_bricks/brick_whatever.

Is there any explanation why there is such a vast difference between the
two tests?

An example of one of my mounts:

/dev/mapper/onn_orchard1-tmp /tmp ext4 defaults,discard 1 2
/dev/gluster_vg_sda/gluster_lv_prod_a /gluster_bricks/brick_a xfs
inode64,noatime,nodiratime 0 0
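
In case it helps anyone reproduce this, the comparison I'm doing is roughly the
following (sda is just an example device name, substitute your own, and the
paths are from my layout above):

cat /sys/block/sda/queue/scheduler
cat /sys/block/sda/device/queue_depth
dd if=/dev/zero of=/tmp/test4.img bs=512 count=5000 oflag=dsync
dd if=/dev/zero of=/gluster_bricks/brick_a/test4.img bs=512 count=5000 oflag=dsync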

On Sun, Mar 8, 2020 at 12:23 PM Jayme  wrote:

> Strahil,
>
> I'm starting to think that my problem could be related to the use of perc
> H310 mini raid controllers in my oVirt hosts. The os/boot SSDs are raid
> mirror but gluster storage is SSDs in passthrough. I've read that the queue
> depth of h310 card is very low and can cause performance issues
> especially when used with flash devices.
>
> dd if=/dev/zero of=test4.img bs=512 count=5000 oflag=dsync on one of my
> hosts gluster bricks /gluster_bricks/brick_a for example takes 45 seconds
> to complete.
>
> I can perform the same operation in ~2 seconds on another server with a
> better raid controller, but with the same model ssd.
>
> I might look at seeing how I can swap out the h310's, unfortunately I
> think that may require me to wipe the gluster storage drives as with
> another controller I believe they'd need to be added as single raid 0
> arrays and would need to be rebuilt to do so.
>
> If I were to take one host down at a time is there a way that I can
> re-build the entire server including wiping the gluster disks and add the
> host back into the ovirt cluster and rebuild it along with the bricks? How
> would you recommend doing such a task if I needed to wipe gluster disks on
> each host ?
>
>
>
> On Sat, Mar 7, 2020 at 6:24 PM Jayme  wrote:
>
>> No worries at all about the length of the email, the details are highly
>> appreciated. You've given me lots to look into and consider.
>>
>>
>>
>> On Sat, Mar 7, 2020 at 10:02 AM Strahil Nikolov 
>> wrote:
>>
>>> On March 7, 2020 1:12:58 PM GMT+02:00, Jayme  wrote:
>>> >Thanks again for the info. You’re probably right about the testing
>>> >method.
>>> >Though the reason I’m down this path in the first place is because I’m
>>> >seeing a problem in real world work loads. Many of my vms are used in
>>> >development environments where working with small files is common such
>>> >as
>>> >npm installs working with large node_module folders, ci/cd doing lots
>>> >of
>>> >mixed operations io and compute.
>>> >
>>> >I started testing some of these things by comparing side to side with a
>>> >vm
>>> >using same specs only difference being gluster vs nfs storage. Nfs
>>> >backed
>>> >storage is performing about 3x better real world.
>>> >
>>> >Gluster version is stock that comes with 4.3.7. I haven’t attempted
>>> >updating it outside of official ovirt updates.
>>> >
>>> >I’d like to see if I could improve it to handle my workloads better. I
>>> >also
>>> >understand that replication adds overhead.
>>> >
>>> >I do wonder how much difference in performance there would be with
>>> >replica
>>> >3 vs replica 3 arbiter. I’d assume arbiter setup would be faster but
>>> >perhaps not by a considerable difference.
>>> >
>>> >I will check into c states as well
>>> >
>>> >On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov 
>>> >wrote:
>>> >
>>> >> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme 
>>> >wrote:
>>> >> >Strahil,
>>> >> >
>>> >> >Thanks for your suggestions. The config is pretty standard HCI setup
>>> >> >with
>>> >> >cockpit and hosts are oVirt node. XFS was handled by the deployment
>>> >> >automatically. The gluster volumes were optimized for virt store.
>>> >> >
>>> >> >I tried noop on the SSDs, that made zero difference in the tests I
>>> >was
>>> >> >running above. I took a look at the random-io-profile and it looks
>>> >like
>>> >> >it
>>> >> >really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5
>>> >--
>>> >> >my
>>> >> >hosts alr

[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-08 Thread Jayme
Strahil,

I'm starting to think that my problem could be related to the use of PERC
H310 mini RAID controllers in my oVirt hosts. The OS/boot SSDs are in a RAID
mirror but the gluster storage SSDs are in passthrough. I've read that the
queue depth of the H310 card is very low and can cause performance issues,
especially when used with flash devices.

dd if=/dev/zero of=test4.img bs=512 count=5000 oflag=dsync on one of my hosts'
gluster bricks (/gluster_bricks/brick_a for example) takes 45 seconds to
complete.

I can perform the same operation in ~2 seconds on another server with a better
RAID controller, but with the same model SSD.

I might look at swapping out the H310s; unfortunately I think that may require
me to wipe the gluster storage drives, as with another controller I believe
they'd need to be added as single RAID 0 arrays and would need to be rebuilt
to do so.

If I were to take one host down at a time, is there a way that I can rebuild
the entire server, including wiping the gluster disks, and add the host back
into the oVirt cluster and rebuild it along with the bricks? How would you
recommend doing such a task if I needed to wipe the gluster disks on each
host?



On Sat, Mar 7, 2020 at 6:24 PM Jayme  wrote:

> No worries at all about the length of the email, the details are highly
> appreciated. You've given me lots to look into and consider.
>
>
>
> On Sat, Mar 7, 2020 at 10:02 AM Strahil Nikolov 
> wrote:
>
>> On March 7, 2020 1:12:58 PM GMT+02:00, Jayme  wrote:
>> >Thanks again for the info. You’re probably right about the testing
>> >method.
>> >Though the reason I’m down this path in the first place is because I’m
>> >seeing a problem in real world work loads. Many of my vms are used in
>> >development environments where working with small files is common such
>> >as
>> >npm installs working with large node_module folders, ci/cd doing lots
>> >of
>> >mixed operations io and compute.
>> >
>> >I started testing some of these things by comparing side to side with a
>> >vm
>> >using same specs only difference being gluster vs nfs storage. Nfs
>> >backed
>> >storage is performing about 3x better real world.
>> >
>> >Gluster version is stock that comes with 4.3.7. I haven’t attempted
>> >updating it outside of official ovirt updates.
>> >
>> >I’d like to see if I could improve it to handle my workloads better. I
>> >also
>> >understand that replication adds overhead.
>> >
>> >I do wonder how much difference in performance there would be with
>> >replica
>> >3 vs replica 3 arbiter. I’d assume arbiter setup would be faster but
>> >perhaps not by a considerable difference.
>> >
>> >I will check into c states as well
>> >
>> >On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov 
>> >wrote:
>> >
>> >> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme 
>> >wrote:
>> >> >Strahil,
>> >> >
>> >> >Thanks for your suggestions. The config is pretty standard HCI setup
>> >> >with
>> >> >cockpit and hosts are oVirt node. XFS was handled by the deployment
>> >> >automatically. The gluster volumes were optimized for virt store.
>> >> >
>> >> >I tried noop on the SSDs, that made zero difference in the tests I
>> >was
>> >> >running above. I took a look at the random-io-profile and it looks
>> >like
>> >> >it
>> >> >really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5
>> >--
>> >> >my
>> >> >hosts already appear to have those sysctl values, and by default are
>> >> >using virtual-host tuned profile.
>> >> >
>> >> >I'm curious what a test like "dd if=/dev/zero of=test2.img bs=512
>> >> >count=1000 oflag=dsync" on one of your VMs would show for results?
>> >> >
>> >> >I haven't done much with gluster profiling but will take a look and
>> >see
>> >> >if
>> >> >I can make sense of it. Otherwise, the setup is pretty stock oVirt
>> >HCI
>> >> >deployment with SSD backed storage and 10Gbe storage network.  I'm
>> >not
>> >> >coming anywhere close to maxing network throughput.
>> >> >
>> >> >The NFS export I was testing was an export from a local server
>> >> >exporting a
>> >> >single SSD (same type as in the oVirt hosts).
>> >> >
>> >> >I might end up switching storage to NFS and d

[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-07 Thread Jayme
No worries at all about the length of the email, the details are highly
appreciated. You've given me lots to look into and consider.



On Sat, Mar 7, 2020 at 10:02 AM Strahil Nikolov 
wrote:

> On March 7, 2020 1:12:58 PM GMT+02:00, Jayme  wrote:
> >Thanks again for the info. You’re probably right about the testing
> >method.
> >Though the reason I’m down this path in the first place is because I’m
> >seeing a problem in real world work loads. Many of my vms are used in
> >development environments where working with small files is common such
> >as
> >npm installs working with large node_module folders, ci/cd doing lots
> >of
> >mixed operations io and compute.
> >
> >I started testing some of these things by comparing side to side with a
> >vm
> >using same specs only difference being gluster vs nfs storage. Nfs
> >backed
> >storage is performing about 3x better real world.
> >
> >Gluster version is stock that comes with 4.3.7. I haven’t attempted
> >updating it outside of official ovirt updates.
> >
> >I’d like to see if I could improve it to handle my workloads better. I
> >also
> >understand that replication adds overhead.
> >
> >I do wonder how much difference in performance there would be with
> >replica
> >3 vs replica 3 arbiter. I’d assume arbiter setup would be faster but
> >perhaps not by a considerable difference.
> >
> >I will check into c states as well
> >
> >On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov 
> >wrote:
> >
> >> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme 
> >wrote:
> >> >Strahil,
> >> >
> >> >Thanks for your suggestions. The config is pretty standard HCI setup
> >> >with
> >> >cockpit and hosts are oVirt node. XFS was handled by the deployment
> >> >automatically. The gluster volumes were optimized for virt store.
> >> >
> >> >I tried noop on the SSDs, that made zero difference in the tests I
> >was
> >> >running above. I took a look at the random-io-profile and it looks
> >like
> >> >it
> >> >really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5
> >--
> >> >my
> >> >hosts already appear to have those sysctl values, and by default are
> >> >using virtual-host tuned profile.
> >> >
> >> >I'm curious what a test like "dd if=/dev/zero of=test2.img bs=512
> >> >count=1000 oflag=dsync" on one of your VMs would show for results?
> >> >
> >> >I haven't done much with gluster profiling but will take a look and
> >see
> >> >if
> >> >I can make sense of it. Otherwise, the setup is pretty stock oVirt
> >HCI
> >> >deployment with SSD backed storage and 10Gbe storage network.  I'm
> >not
> >> >coming anywhere close to maxing network throughput.
> >> >
> >> >The NFS export I was testing was an export from a local server
> >> >exporting a
> >> >single SSD (same type as in the oVirt hosts).
> >> >
> >> >I might end up switching storage to NFS and ditching gluster if
> >> >performance
> >> >is really this much better...
> >> >
> >> >
> >> >On Fri, Mar 6, 2020 at 5:06 PM Strahil Nikolov
> >
> >> >wrote:
> >> >
> >> >> On March 6, 2020 6:02:03 PM GMT+02:00, Jayme 
> >> >wrote:
> >> >> >I have 3 server HCI with Gluster replica 3 storage (10GBe and SSD
> >> >> >disks).
> >> >> >Small file performance inner-vm is pretty terrible compared to a
> >> >> >similar
> >> >> >spec'ed VM using NFS mount (10GBe network, SSD disk)
> >> >> >
> >> >> >VM with gluster storage:
> >> >> >
> >> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >> >> >1000+0 records in
> >> >> >1000+0 records out
> >> >> >512000 bytes (512 kB) copied, 53.9616 s, 9.5 kB/s
> >> >> >
> >> >> >VM with NFS:
> >> >> >
> >> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >> >> >1000+0 records in
> >> >> >1000+0 records out
> >> >> >512000 bytes (512 kB) copied, 2.20059 s, 233 kB/s
> >> >> >
> >> >> >This is a very big difference, 2 seconds to copy 1000 files on
> >NFS
>

[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-07 Thread Jayme
Thanks again for the info. You’re probably right about the testing method.
The reason I’m down this path in the first place, though, is that I’m seeing
a problem in real-world workloads. Many of my VMs are used in development
environments where working with small files is common, such as npm installs
working with large node_modules folders and CI/CD doing lots of mixed I/O and
compute operations.

I started testing some of these things by comparing side by side with a VM
using the same specs, the only difference being gluster vs NFS storage.
NFS-backed storage is performing about 3x better in the real world.

The Gluster version is the stock one that comes with 4.3.7. I haven’t
attempted updating it outside of official oVirt updates.

I’d like to see if I could improve it to handle my workloads better. I also
understand that replication adds overhead.

I do wonder how much difference in performance there would be between replica
3 and replica 3 arbiter. I’d assume the arbiter setup would be faster, but
perhaps not by a considerable margin.

I will check into C-states as well.

On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov 
wrote:

> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme  wrote:
> >Strahil,
> >
> >Thanks for your suggestions. The config is pretty standard HCI setup
> >with
> >cockpit and hosts are oVirt node. XFS was handled by the deployment
> >automatically. The gluster volumes were optimized for virt store.
> >
> >I tried noop on the SSDs, that made zero difference in the tests I was
> >running above. I took a look at the random-io-profile and it looks like
> >it
> >really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5 --
> >my
> >hosts already appear to have those sysctl values, and by default are
> >using virtual-host tuned profile.
> >
> >I'm curious what a test like "dd if=/dev/zero of=test2.img bs=512
> >count=1000 oflag=dsync" on one of your VMs would show for results?
> >
> >I haven't done much with gluster profiling but will take a look and see
> >if
> >I can make sense of it. Otherwise, the setup is pretty stock oVirt HCI
> >deployment with SSD backed storage and 10Gbe storage network.  I'm not
> >coming anywhere close to maxing network throughput.
> >
> >The NFS export I was testing was an export from a local server
> >exporting a
> >single SSD (same type as in the oVirt hosts).
> >
> >I might end up switching storage to NFS and ditching gluster if
> >performance
> >is really this much better...
> >
> >
> >On Fri, Mar 6, 2020 at 5:06 PM Strahil Nikolov 
> >wrote:
> >
> >> On March 6, 2020 6:02:03 PM GMT+02:00, Jayme 
> >wrote:
> >> >I have 3 server HCI with Gluster replica 3 storage (10GBe and SSD
> >> >disks).
> >> >Small file performance inner-vm is pretty terrible compared to a
> >> >similar
> >> >spec'ed VM using NFS mount (10GBe network, SSD disk)
> >> >
> >> >VM with gluster storage:
> >> >
> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >> >1000+0 records in
> >> >1000+0 records out
> >> >512000 bytes (512 kB) copied, 53.9616 s, 9.5 kB/s
> >> >
> >> >VM with NFS:
> >> >
> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >> >1000+0 records in
> >> >1000+0 records out
> >> >512000 bytes (512 kB) copied, 2.20059 s, 233 kB/s
> >> >
> >> >This is a very big difference, 2 seconds to copy 1000 files on NFS
> >VM
> >> >VS 53
> >> >seconds on the other.
> >> >
> >> >Aside from enabling libgfapi is there anything I can tune on the
> >> >gluster or
> >> >VM side to improve small file performance? I have seen some guides
> >by
> >> >Redhat in regards to small file performance but I'm not sure what/if
> >> >any of
> >> >it applies to oVirt's implementation of gluster in HCI.
> >>
> >> You can use the rhgs-random-io tuned  profile from
> >>
> >
> ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.4.2.0-1.el7rhgs.src.rpm
> >> and try with that on your hosts.
> >> In my case, I have  modified  it so it's a mixture between
> >rhgs-random-io
> >> and the profile for Virtualization Host.
> >>
> >> Also,ensure that your bricks are  using XFS with relatime/noatime
> >mount
> >> option and your scheduler for the SSDs is either  'noop' or 'none'
> >.The
> >> default  I/O scheduler for RHEL7 is deadline which is giving

[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-06 Thread Jayme
Strahil,

Thanks for your suggestions. The config is a pretty standard HCI setup with
cockpit, and the hosts are oVirt Node. XFS was handled by the deployment
automatically. The gluster volumes were optimized for virt store.

I tried noop on the SSDs; that made zero difference in the tests I was
running above. I took a look at the random-io profile and it looks like it
really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5 -- my
hosts already appear to have those sysctl values, and by default are using
the virtual-host tuned profile.
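
(For reference, this is more or less how I checked those values; nothing
fancy:)

tuned-adm active
sysctl vm.dirty_background_ratio vm.dirty_ratio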

I'm curious what a test like "dd if=/dev/zero of=test2.img bs=512
count=1000 oflag=dsync" on one of your VMs would show for results?

I haven't done much with gluster profiling but will take a look and see if
I can make sense of it. Otherwise, the setup is pretty stock oVirt HCI
deployment with SSD backed storage and 10Gbe storage network.  I'm not
coming anywhere close to maxing network throughput.

The NFS export I was testing was an export from a local server exporting a
single SSD (same type as in the oVirt hosts).

I might end up switching storage to NFS and ditching gluster if performance
is really this much better...


On Fri, Mar 6, 2020 at 5:06 PM Strahil Nikolov 
wrote:

> On March 6, 2020 6:02:03 PM GMT+02:00, Jayme  wrote:
> >I have 3 server HCI with Gluster replica 3 storage (10GBe and SSD
> >disks).
> >Small file performance inner-vm is pretty terrible compared to a
> >similar
> >spec'ed VM using NFS mount (10GBe network, SSD disk)
> >
> >VM with gluster storage:
> >
> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >1000+0 records in
> >1000+0 records out
> >512000 bytes (512 kB) copied, 53.9616 s, 9.5 kB/s
> >
> >VM with NFS:
> >
> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >1000+0 records in
> >1000+0 records out
> >512000 bytes (512 kB) copied, 2.20059 s, 233 kB/s
> >
> >This is a very big difference, 2 seconds to copy 1000 files on NFS VM
> >VS 53
> >seconds on the other.
> >
> >Aside from enabling libgfapi is there anything I can tune on the
> >gluster or
> >VM side to improve small file performance? I have seen some guides by
> >Redhat in regards to small file performance but I'm not sure what/if
> >any of
> >it applies to oVirt's implementation of gluster in HCI.
>
> You can use the rhgs-random-io tuned  profile from
> ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.4.2.0-1.el7rhgs.src.rpm
> and try with that on your hosts.
> In my case, I have  modified  it so it's a mixture between rhgs-random-io
> and the profile for Virtualization Host.
>
> Also,ensure that your bricks are  using XFS with relatime/noatime mount
> option and your scheduler for the SSDs is either  'noop' or 'none' .The
> default  I/O scheduler for RHEL7 is deadline which is giving preference to
> reads and  your  workload  is  definitely 'write'.
>
> Ensure that the virt settings are  enabled for your gluster volumes:
> 'gluster volume set  group virt'
>
> Also, are you running  on fully allocated disks for the VM or you started
> thin ?
> I'm asking as creation of new shards  at gluster  level is a slow task.
>
> Have you checked  gluster  profiling the volume?  It can clarify what is
> going on.
>
>
> Also are you comparing apples to apples ?
> For example, 1 ssd  mounted  and exported  as NFS and a replica 3 volume
> of the same type of ssd ? If not,  the NFS can have more iops due to
> multiple disks behind it, while Gluster has to write the same thing on all
> nodes.
>
> Best Regards,
> Strahil Nikolov
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2PR4JHLJFJQY3MMLKDBKSKALT2JX7KT5/


[ovirt-users] What if anything can be done to improve small file performance with gluster?

2020-03-06 Thread Jayme
I have a 3-server HCI with Gluster replica 3 storage (10GbE and SSD disks).
Small-file performance inside the VMs is pretty terrible compared to a
similarly spec'ed VM using an NFS mount (10GbE network, SSD disk).

VM with gluster storage:

# dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 53.9616 s, 9.5 kB/s

VM with NFS:

# dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 2.20059 s, 233 kB/s

This is a very big difference: 2 seconds to complete 1000 synchronous 512-byte
writes on the NFS VM vs 53 seconds on the other.

Aside from enabling libgfapi, is there anything I can tune on the gluster or
VM side to improve small-file performance? I have seen some guides by Red Hat
regarding small-file performance but I'm not sure what, if any, of it applies
to oVirt's implementation of gluster in HCI.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TIVAULOZPYFTQ3AYRX6KSRW6KUQKWXJ5/


[ovirt-users] Re: Hyperconverged setup questions

2020-02-29 Thread Jayme
I think you may be misinterpreting HCI. Even though the hosts are used for
storage, it’s still not technically local storage, as the hosts are acting as
clients and are mounting the gluster storage, so storage still goes over the
network.

You can get better gluster performance if you switch to libgfapi, however it’s
not a default option and has some known issues. I also read in a report
recently that Red Hat doesn’t plan to implement it because the performance
improvements aren’t great, although many oVirt users on this group seem to
have seen a performance boost after switching to libgfapi.

If I/O performance is your main concern, I’m not sure gluster is the answer.
There are likely others on this group who have more real-world experience with
various environments and could give you more comparisons.

On Sat, Feb 29, 2020 at 8:22 AM Vrgotic, Marko 
wrote:

> Hi Strahil , Leo and Jayme,
>
>
>
> This thread is getting more and more useful, great.
>
>
>
> Atm, I have 15 nodes cluster with shared Storage from Netapp. The storage
> network is (NFS4.1) on 20GB LACP, separated from control.
>
> Performance is generally great, except in several test cases when using
> "send next data after write confirm". This situation does not care about
> speed of network, kernel buffers or any other buffers, but only about
> storage server speed, and then we hit the speed issue.
>
>
>
> The main reason why I am asking for HCI, is to get as close as possible to
> Local Storage speed with multiple hosts in same cluster.
>
> The idea is to add HCI to current setup, as second cluster, utilizing CPU
> RAM and LocalStorage of joined nodes.
>
> --Is this actually a direction which will get me to the wanted result, or
> am I misunderstanding purpose of HCI?
>
>
>
> I understand that the HCI with SHE requires replica2+arbiter or replica3,
> but that is not my situation. I wish only to add HCI for reasons above.
>
> --Do I need the distributed-replicated in that case, or I can simply use
> distributed (if still supported) setup?
>
>
>
> Jayme, I do have resources to set this up in a staged environment, and I
> will be happy to share the info, but first I need to find out if I am at
> all moving in right direction.
>
>
>
> Kindly awaiting your reply.
>
>
>
>
>
> -
>
> kind regards/met vriendelijke groeten
>
> Marko Vrgotic
>
> Sr. System Engineer @ System Administration
>
>
>
> ActiveVideo
>
> e: m.vrgo...@activevideo.com
>
> w: www.activevideo.com <https://www.activevideo.com>
>
> ActiveVideo Networks BV. Mediacentrum 3745 Joop van den Endeplein 1.1217
> WJ Hilversum, The Netherlands. The information contained in this message
> may be legally privileged and confidential. It is intended to be read only
> by the individual or entity to whom it is addressed or by their designee.
> If the reader of this message is not the intended recipient, you are on
> notice that any distribution of this message, in any form, is strictly
> prohibited.  If you have received this message in error, please immediately
> notify the sender and/or ActiveVideo Networks, LLC by telephone at +1
> 408.931.9200 and delete or destroy any copy of this message.
>
>
>
>
>
> On 29/02/2020, 11:53, "Strahil Nikolov"  wrote:
>
>
>
> On February 29, 2020 11:19:30 AM GMT+02:00, Jayme 
> wrote:
>
> >I currently have a three host hci in rep 3 (no arbiter). 10gbe network
>
> >and
>
> >ssds making up the bricks. I’ve wondered what the result of adding
>
> >three
>
> >more nodes to expand hci would be. Is there an overall storage
>
> >performance
>
> >increase when gluster is expanded like this?
>
> >
>
> >On Sat, Feb 29, 2020 at 4:26 AM Leo David  wrote:
>
> >
>
> >> Hi,
>
> >> As a first setup, you can go with a 3 nodes HCI and having the data
>
> >volume
>
> >> in a replica 3 setup.
>
> >> Afterwards, if you want to expand HCI ( compute and storage too) you
>
> >can
>
> >> add sets of 3  nodes, and the data volume will automatically become
>
> >> replicated-distributed. Safely, you can add sets of 3 nodes up to 12
>
> >nodes
>
> >> per HCI.
>
> >> You can also add "compute only nodes" and not extending storage too.
>
> >This
>
> >> can be done by adding nodes one by one.
>
> >> As an example, I have an implementation where are 3 hyperconverged
>
> >nodes,
>
> >> they form a replica 3 volume, and later 

[ovirt-users] Re: Hyperconverged setup questions

2020-02-29 Thread Jayme
I currently have a three-host HCI in replica 3 (no arbiter), with a 10GbE
network and SSDs making up the bricks. I’ve wondered what the result of adding
three more nodes to expand the HCI would be. Is there an overall storage
performance increase when gluster is expanded like this?

On Sat, Feb 29, 2020 at 4:26 AM Leo David  wrote:

> Hi,
> As a first setup, you can go with a 3 nodes HCI and having the data volume
> in a replica 3 setup.
> Afterwards, if you want to expand HCI ( compute and storage too) you can
> add sets of 3  nodes, and the data volume will automatically become
> replicated-distributed. Safely, you can add sets of 3 nodes up to 12 nodes
> per HCI.
> You can also add "compute only nodes" and not extending storage too. This
> can be done by adding nodes one by one.
> As an example, I have an implementation where are 3 hyperconverged nodes,
> they form a replica 3 volume, and later i have added the 4th node to the
> cluster which only adds RAM and CPU, whilst consuming storage from the
> existing 3-node-based volume.
> Hope this helps.
> Cheers,
>
> Leo
>
>
> On Fri, Feb 28, 2020, 15:25 Vrgotic, Marko 
> wrote:
>
>> Hi Strahil,
>>
>>
>>
>> I circled back on your reply while ago regarding oVirt Hyperconverged and
>> more than 3 nodes in cluster:
>>
>>
>>
>> “Hi Marko, I guess  you can use distributed-replicated volumes  and
>> oVirt  cluster with host triplets.”
>>
>> Initially I understood that its limited to 3Nodes max per HC cluster, but
>> now reading documentation further
>> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Maintenance_and_Upgrading_Resources.html
>> that does not look like it.
>>
>>
>>
>> Would you be so kind to give me an example or clarify what you meant by “*you
>> can use distributed-replicated volumes  and oVirt  cluster with host
>> triplets.*” ?
>>
>>
>>
>> Kindly awaiting your reply.
>>
>>
>>
>>
>>
>> -
>>
>> kind regards/met vriendelijke groeten
>>
>>
>>
>> Marko Vrgotic
>> ActiveVideo
>>
>>
>>
>>
>>
>>
>>
>> *From: *"Vrgotic, Marko" 
>> *Date: *Friday, 11 October 2019 at 08:49
>> *To: *Strahil 
>> *Cc: *users 
>> *Subject: *Re: [ovirt-users] Hyperconverged setup questions
>>
>>
>>
>> Hi Strahil,
>>
>>
>>
>> Thank you.
>>
>> One maybe stupid question, but significant to me:
>>
>> Considering i haven’t been playing before with hyperconverged setup in
>> oVirt, is this something i can do from ui cockpit or does it require me
>> first setup an Glusterfs on the Hosts before doing anything via oVirt API
>> or Web interface?
>>
>>
>>
>> Kindly awaiting your reply.
>>
>>
>>
>> Marko
>>
>>
>>
>> Sent from my iPhone
>>
>>
>>
>> On 11 Oct 2019, at 06:14, Strahil  wrote:
>>
>> Hi Marko,
>>
>> I guess  you can use distributed-replicated volumes  and oVirt  cluster
>> with host triplets.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Oct 10, 2019 15:30, "Vrgotic, Marko" 
>> wrote:
>>
>> Dear oVirt,
>>
>>
>>
>> Is it possible to add oVirt 3Hosts/Gluster hyperconverged cluster to
>> existing oVirt setup? I need this to achieve Local storage performance, but
>> still have pool of Hypevisors available.
>>
>> Is it possible to have more than 3Hosts in Hyperconverged setup?
>>
>>
>>
>> I have currently 1Shared Cluster (NFS based storage, where also SHE is
>> hosted) and 2Local Storage clusters.
>>
>>
>>
>> oVirt current version running is 4.3.4.
>>
>>
>>
>> Kindly awaiting your reply.
>>
>>
>>
>>
>>
>> — — —
>> Met vriendelijke groet / Kind regards,
>>
>> *Marko Vrgotic*
>>
>> *ActiveVideo*
>>
>>
>>
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UH2FDN57V2TOQXD36UQXVTVCTB37O4OE/
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CJ5IAHCMNU3KSYUR3MCD2NNJTDEIHRNX/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/APDEYVK7HLTMCVEWPQ26NECDQ2SMERCI/


[ovirt-users] Re: Hyperconverged setup questions

2020-02-28 Thread Jayme
Marko,

From my understanding, you can have more than 3 hosts in an HCI cluster, but
to expand HCI (compute and storage) you need to add hosts in multiples of
three, i.e. go from 3 hosts to 6 or 9, etc.

You can still add hosts into the cluster as compute-only hosts though, so you
could have 3 hosts with gluster and a 4th that is just compute.
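
At the gluster level the storage expansion ends up looking something like the
below (the volume name, hostnames and brick paths are made up for the example;
the oVirt GUI can drive the same thing for you):

gluster volume add-brick data replica 3 \
  host4:/gluster_bricks/data/brick \
  host5:/gluster_bricks/data/brick \
  host6:/gluster_bricks/data/brick
gluster volume rebalance data start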

On Fri, Feb 28, 2020 at 9:24 AM Vrgotic, Marko 
wrote:

> Hi Strahil,
>
>
>
> I circled back on your reply while ago regarding oVirt Hyperconverged and
> more than 3 nodes in cluster:
>
>
>
> “Hi Marko, I guess  you can use distributed-replicated volumes  and
> oVirt  cluster with host triplets.”
>
> Initially I understood that its limited to 3Nodes max per HC cluster, but
> now reading documentation further
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Maintenance_and_Upgrading_Resources.html
> that does not look like it.
>
>
>
> Would you be so kind to give me an example or clarify what you meant by “*you
> can use distributed-replicated volumes  and oVirt  cluster with host
> triplets.*” ?
>
>
>
> Kindly awaiting your reply.
>
>
>
>
>
> -
>
> kind regards/met vriendelijke groeten
>
>
>
> Marko Vrgotic
> ActiveVideo
>
>
>
>
>
>
>
> *From: *"Vrgotic, Marko" 
> *Date: *Friday, 11 October 2019 at 08:49
> *To: *Strahil 
> *Cc: *users 
> *Subject: *Re: [ovirt-users] Hyperconverged setup questions
>
>
>
> Hi Strahil,
>
>
>
> Thank you.
>
> One maybe stupid question, but significant to me:
>
> Considering i haven’t been playing before with hyperconverged setup in
> oVirt, is this something i can do from ui cockpit or does it require me
> first setup an Glusterfs on the Hosts before doing anything via oVirt API
> or Web interface?
>
>
>
> Kindly awaiting your reply.
>
>
>
> Marko
>
>
>
> Sent from my iPhone
>
>
>
> On 11 Oct 2019, at 06:14, Strahil  wrote:
>
> Hi Marko,
>
> I guess  you can use distributed-replicated volumes  and oVirt  cluster
> with host triplets.
>
> Best Regards,
> Strahil Nikolov
>
> On Oct 10, 2019 15:30, "Vrgotic, Marko"  wrote:
>
> Dear oVirt,
>
>
>
> Is it possible to add oVirt 3Hosts/Gluster hyperconverged cluster to
> existing oVirt setup? I need this to achieve Local storage performance, but
> still have pool of Hypevisors available.
>
> Is it possible to have more than 3Hosts in Hyperconverged setup?
>
>
>
> I have currently 1Shared Cluster (NFS based storage, where also SHE is
> hosted) and 2Local Storage clusters.
>
>
>
> oVirt current version running is 4.3.4.
>
>
>
> Kindly awaiting your reply.
>
>
>
>
>
> — — —
> Met vriendelijke groet / Kind regards,
>
> *Marko Vrgotic*
>
> *ActiveVideo*
>
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UH2FDN57V2TOQXD36UQXVTVCTB37O4OE/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QATUNC7HC6ZV2CNCUVZVKYX2MEDUNQ7W/


[ovirt-users] Re: OVA import fails always

2020-02-27 Thread Jayme
If the problem is with the upload process specifically, it’s likely that you
do not have the oVirt engine CA certificate installed/trusted in your browser.
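
The CA can be downloaded straight from the engine and then imported into the
browser's certificate store; something along these lines (replace the FQDN
with your engine's):

curl -k -o ovirt-engine-ca.pem \
  'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'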

On Thu, Feb 27, 2020 at 11:34 PM Juan Pablo Lorier 
wrote:

> Hi,
>
> I'm running 4.3.8.2-1.el7 (just updated engine to see if it helps) and I
> haven't been able to import vms in OVA format, I've tried many appliances
> downloaded from the web but couldn't get them to work.
>
> Any hints?
>
> Regards
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KRCE36GYQIOCXYR6K3KWUJA6R4ODWU56/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VWU7EM2ZFX6STEJU67WQTIRJZWBBVWZG/


[ovirt-users] Re: oVirt Python SDK and monitor export as OVA task

2020-02-27 Thread Jayme
Gianluca,

This is not a direct solution to your problem, but my project here:
https://github.com/silverorange/ovirt_ansible_backup was recently updated to
make use of the ovirt_event_info Ansible module to determine the state of the
export. I'm not sure how to do the same in Python directly, but I'm sure it'd
be possible. Unfortunately my solution is a full VM backup including all
disks, which I understand is not your goal here.
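
One rough idea for doing it outside of Ansible: the same events are exposed
over the REST API, so a small loop could poll until the "exported successfully"
message shows up. An untested sketch, with the engine FQDN and credentials as
placeholders:

until curl -s -k --user 'admin@internal:PASSWORD' \
    'https://engine.example.com/ovirt-engine/api/events?max=50' \
  | grep -q 'exported successfully as a Virtual Appliance'; do
  sleep 30
done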

I have used vProtect in the past to backup VMs and it has the ability to
specify which VM disks to include in the backup. That may be an option for
you to explore.

- Jayme

On Thu, Feb 27, 2020 at 11:33 AM Gianluca Cecchi 
wrote:

> Hello,
> sometimes I have environments (typically with Oracle RDBMS on virtual
> machines) where there is one boot disk and one (often big, such as 500Gb or
> more) data disk.
> The data disk has already its application backup (typically RMAN) and I
> want to complement it with a backup of only the boot disk of the operating
> system and VM configuration.
> I'm almost done completing a script using oVirt Python SDK and sharing for
> comments.
> I could be wrong, but I didn't find any Ansible module to do this with
> Ansible playbooks: I can only save the whole VM, which in my case wouldn't be
> necessary and would instead waste time and storage.
> The basic steps are:
> - make a snapshot of the target vm, composed only by the boot disk
> - clone the snapshot
> - export the clone as OVA
> - delete clone
> - delete snapshot
>
> There are some things to discuss about, probably when I will share the
> overall job (such as the name to give to the boot disk, if not using the
> bootable flag, the way to import the OVA in case of actual need, where you
> will have to comment out some fs entries related to missing disks, etc.).
> The only thing missing is how to monitor the export-to-OVA task: as with
> Ansible, the call returns almost immediately, while the export is actually
> still running.
> I need to monitor it, so only at its real end I can run the last 2 steps
> of the above list of tasks.
>
> Can you give me any hint? Not found very much in guide or docs on the web.
> I'm currently using a sleep because my boot disk is about 20Gb in size and
> I know that in less than 2 minutes it normally completes.
>
> The export as ova is very similar to what found in the examples and they
> don't contain a monitor for it but only the connection.close() call:
>
> cloned_vm_service.export_to_path_on_host(
> host=types.Host(id=host.id),
> directory = export_as_ova_dir,
> filename = cloned_vm.name + '.ova'
> )
>
> Thanks in advance,
> Gianluca
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/66AFZIU4GTQRJSZR5F5P2OR6ZS6IBDP7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RL3BU5JXG4IN4XRJU3XFRMVGQFK7VIFY/


[ovirt-users] Re: Ovirt API and CLI

2020-02-27 Thread Jayme
Echoing what others have said. Ansible is your best option here.

On Thu, Feb 27, 2020 at 7:22 AM Nathanaël Blanchet  wrote:

>
> Le 27/02/2020 à 11:00, Yedidyah Bar David a écrit :
>
> On Thu, Feb 27, 2020 at 11:53 AM Eugène Ngontang  
>  wrote:
>
> Yes Ansible ovirt_vms module is useful, I use it for provisioning/deployment, 
> but once my VM created, I'd like to administrate/interact with them, I don't 
> think I should write playbooks for that.
>
> Why not? You're the next devops :)
>
> I used to use ovirt-shell (removed in 4.4), and instead of it I now
> control all my vms with ansible playbooks:
>
>- querying with the ovirt_*_info modules, with appropriate filters (combine,
>dict2items) and conditions (when, until)
>- interaction with other modules (with present/absent statements for
>all parameters)
>
> To be clear, I am not a developer, but once I got into the habit of using a
> proper environment (venv, IDE, loops, structured playbooks and roles, dict
> structures, etc.), I was able to do what I want, or rather what the API lets
> me do.
>
> Before beginning, I should advise you to take the time to study the
> structure of the output of the registered variable.
>
> Here is a piece of one of my commonly used playbooks to check the status of
> the vms I want:
>
> - name: ovirt template to test the modules
>   hosts: localhost
>   connection: local
>   tasks:
>     - block:
>         - include: ovirt_auth.yaml
>           tags: auth,change
>         - name: vm facts
>           ovirt_vm_info:
>             auth: "{{ ovirt_auth }}"
>             pattern: "name=vm5 or name=vm8"
>           register: vm_info
>         - debug: var=vm_info.ovirt_vms
>           # msg: "{{ vm_info.ovirt_vms | map(attribute='status') | list }}"
>         - name: "Build a dictionary with combine"
>           set_fact:
>             vm_status: "{{ vm_status|default({})|combine({item.name: item.status}) }}"
>           loop: "{{ vm_info.ovirt_vms }}"
>           when: item.status == "up"
>         - debug:
>             msg: "{{ vm_status }}"
>       always:
>         - include: ovirt_auth_revoke.yaml
>           tags: auth,change
>
> Good luck!
>
> This is up to you, of course.
>
> For a project that uses heavily the ansible modules, see
> ovirt-ansible-hosted-engine-setup.
>
> For one that uses the python SDK, see ovirt-system-tests. The SDK
> itself also has a very useful collection of examples.
>
>
> But I'll find a solution.
>
> Good luck and best regards,
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> SIRE227 avenue Professeur-Jean-Louis-Viala 
> 
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LDCRW5YXHEMEY77XTHQKV4CAHHUKF43E/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ICVGFR5LJH7HFYY6S7TMS3GP4GPIPQMD/


[ovirt-users] oVirt ansible backup improvements

2020-02-25 Thread Jayme
If anyone has been following along, I had previously shared a blog post and
GitHub repo regarding my unofficial solution for backing up oVirt VMs using
Ansible.

Martin Necas reached out to me and we collaborated on some great
improvements. Namely, it is now possible to run the playbook from any host
without requiring direct access to storage (which I was previously using
for export status verification). There were several other improvements and
cleanups made as well.

The changes have been merged in and the README updated; you can find the
project here: https://github.com/silverorange/ovirt_ansible_backup

Big thanks to Martin for helping out. Very much appreciated!

- Jayme
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JNSY6GYNS6LPNUJXERUO2EOG5F3P7B2M/


[ovirt-users] Re: I wrote an article on using Ansible to backup oVirt VMs

2020-02-18 Thread Jayme
Gianluca,

Thank you so much for the great feedback, it is very much appreciated! I
too have to carve out some time to test some of these ideas more
thoroughly, but I wanted to offer some of my initial thoughts anyway.

My goal is for the playbook(s) to be as simple as possible with as little
configuration as possible. Ideally I'd love to see the playbooks able to be
run from any host without requiring a connection to the engine database or
needing to have access to storage in order to verify export status.

1. Vault: I am aware of this and have seen this method used in other
oVirt/RHEV documentation. The reason I left it out is because I want to run
the playbook on cron without being prompted for a password. This could
potentially be solved by specifying the vault password as an environment
variable in cron, but in the end the password still needs to be provided
somewhere for the playbook to work hands-off. I suppose it's a matter of
which is the most secure and recommended way to do so.  Open to suggestions
here.

2. Blocks: I am aware of the use of blocks in Ansible but don't personally
have much direct experience using them. Your idea to use a block for SSO
token seems reasonable and likely should be implemented. I need to test
that out.

3. Export Timing: I like your solution for probing the DB for export status
and I'd like to spend some more time looking at that. I wonder if it's
perhaps a bit too complex and if there may be an easier way without
directly interacting with the engine database. One idea I had which I think
could work would be by use of the
https://docs.ansible.com/ansible/latest/modules/ovirt_event_info_module.html#ovirt-event-info-module
module.
I believe this module could be used in a wait_for until the message "Vm X
was exported successfully as a Virtual Appliance to path..." appears in the
VM's event messages. To make sure we don't get prior events we could
register the current event index ID in a variable then use the "from_"
parameter to only search for new events. I do think something like this
could work but I haven't had enough time to thoroughly test it and I'm not
sure if it's the best possible solution. There may be an even easier way to
determine the export status using existing ovirt Ansible modules but I have
not found one yet. What are your thoughts on this method?
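Just to illustrate the idea outside of Ansible: the same "wait for the
export-complete event" loop can be sketched against the raw events API with
curl (engine FQDN, credentials and VM name below are placeholders; in the
playbook itself ovirt_event_info with the "from_" index would be the cleaner
way to do it):

  # poll the engine events feed until the export-complete message appears
  until curl -sk -u admin@internal:secret \
      'https://engine.example.com/ovirt-engine/api/events' \
      | grep -q 'exported successfully as a Virtual Appliance'; do
    sleep 60
  done

Crude, but it shows the polling idea without touching the engine database.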

I'd also be interested to hear if you have any thoughts or opinions on ways
to improve backup retention policy to make it more versatile.

Thanks again for your feedback!

- Jayme



On Tue, Feb 18, 2020 at 8:15 AM Gianluca Cecchi 
wrote:

> On Mon, Feb 10, 2020 at 5:01 PM Jayme  wrote:
>
>> I've been part of this mailing list for a while now and have received a
>> lot of great advice and help on various subjects. I read the list daily and
>> one thing I've noticed is that many users are curious about backup options
>> for oVirt (myself included). I wanted to share with the community a
>> solution I've come up with to easily backup multiple running oVirt VMs to
>> OVA format using some basic Ansible playbooks. I've put together a blog
>> post detailing the process which also includes links to a Github repo
>> containing the playbooks here:
>> https://blog.silverorange.com/backing-up-ovirt-vms-with-ansible-4c2fca8b3b43
>>
>> Any feedback, suggestions or questions are welcome. I hope this
>> information is helpful.
>>
>> Thanks!
>>
>> - Jayme
>>
>>
> Hi Jayme,
> sorry in advance for the long mail, where I try to give details; I don't
> know your Ansible experience.
> A very nice and clean article indeed, with useful details (apart from text
> not justified: I prefer it but YMMV) and pretty fair with vProtect work and
> also pros and cons of their solution.
> I met Pawel Maczka from vProtect during oVirt Summit last year and I was
> able to appreciate his kindness and skill and efforts in integrating with
> oVirt/RHV.
>
> That said, I have some suggestions for you. In the next days I could work
> on a similar need for a customer, so it will be nice to share efforts and
> hopefully results... ;-)
> This week I have not much time but if you can elaborate and test what
> below, we can share.
>
> 1) engine parameters
> you could use ansible vault to encrypt credential files, to have better
> security and so you can disclose the playbook files without having to care
> about sensitive information
> In my case I put username, password, ovirt mgr fqdn, ovirt ca file all in
> a file and then encrypt it (and also engine database ones, see below).
> Then I create a securely protected vault file named "vault_file" where I
> store the vault password and then I recall the playbook with:
>
> ansible-playbook  --vault-password-file=vault_file backup_ovirt_vms.yml
>
> Alternatively you are prom

[ovirt-users] Re: backup

2020-02-17 Thread Jayme
Hello,

I have not used this script myself so I don't have a resolution for you,
however I recently wrote an article regarding a simple method to backup
oVirt VMs using ansible without the need of any complicated software or
proxy VMs involved. Here is the link to the article if it's helpful to you:
https://blog.silverorange.com/backing-up-ovirt-vms-with-ansible-4c2fca8b3b43

On Mon, Feb 17, 2020 at 2:11 PM Nazan CENGİZ 
wrote:

> Hi all,
>
> I am trying https://github.com/vacosta94/VirtBKP.
>
> ovirt version:4.3.5
>
> my config file;
>
> [bkp]
> url = https://xxx/ovirt-engine/api
> user= admin@internal
> password= yyy
> ca_file = /opt/VirtBKP/ca.crt
> bkpvm   = VirtBKM
> bckdir  = /mnt/backup
>
> [restore]
> url = https:/xxx/ovirt-engine/api
> user= admin@internal
> password= yyy
> ca_file = ca.crt
> storage = hosted_storage(storage domain name for new vm???)
> proxy   = xxx(engine FQDN)
> proxyport   = 54323
>
> Fail on below;
>
>
> [root@virtbkp VirtBKP]# /opt/VirtBKP/backup_vm.py default.conf Bacchus
> [OK] Connection to oVIrt API success
> https://ovirtengine2.5ghvl.local/ovirt-engine/api
> [INFO] Trying to create snapshot of VM:
> 8a95f435-94dd-4a69-aed0-46395bcbd082
> [INFO] Waiting until snapshot creation ends
> [INFO] Waiting until snapshot creation ends
> [OK] Snapshot created
> [INFO] Trying to create a qcow2 file of disk
> aa564596-fd33-4734-8050-0f82130a677b
> [INFO] Attach snap disk to bkpvm
> Traceback (most recent call last):
>   File "/opt/VirtBKP/backup_vm.py", line 6, in 
> b.main()
>   File "/opt/VirtBKP/backup_vm_last.py", line 242, in main
> self.backup(self.vmid,self.snapid,disk_id,self.bkpvm)
>   File "/opt/VirtBKP/backup_vm_last.py", line 210, in backup
> self.attach_disk(bkpvm,disk_id,snapid)
>   File "/opt/VirtBKP/backup_vm_last.py", line 123, in attach_disk
> resp_attach = requests.post(urlattach, data=xmlattach,
> headers=headers, verify=False, auth=(self.user,self.password))
>   File "/usr/lib/python2.7/site-packages/requests/api.py", line 112, in
> post
> return request('post', url, data=data, json=json, **kwargs)
>   File "/usr/lib/python2.7/site-packages/requests/api.py", line 58, in
> request
> return session.request(method=method, url=url, **kwargs)
>   File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 498,
> in request
> prep = self.prepare_request(req)
>   File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 441,
> in prepare_request
> hooks=merge_hooks(request.hooks, self.hooks),
>   File "/usr/lib/python2.7/site-packages/requests/models.py", line 309, in
> prepare
> self.prepare_url(url, params)
>   File "/usr/lib/python2.7/site-packages/requests/models.py", line 377, in
> prepare_url
> raise InvalidURL(*e.args)
> requests.exceptions.InvalidURL: Failed to parse:
> https://ovirtengine2.5ghvl.local/ovirt-engine/api/v3/vms/13d45c7f-7812-4f01-9bd8-a3e8ff91c15b/disks/
>
>
>
>
> 
> Nazan CENGİZ
> AR-GE MÜHENDİSİ
> Mustafa Kemal Mahallesi 2120 Cad. No:39 06510 Çankaya Ankara TÜRKİYE
> +90 312 219 57 87 +90 312 219 57 97
> YASAL UYARI: Bu elektronik posta işbu linki kullanarak ulaşabileceğiniz
> Koşul ve Şartlar dokümanına tabidir.
> 
> LEGAL NOTICE: This e-mail is subject to the Terms and Conditions
> document which can be accessed with this link.
> 
> Lütfen gerekmedikçe bu sayfa ve eklerini yazdırmayınız / Please consider
> the environment before printing this email
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDLWEVOS6MEASC5KUDSMYZIKCH7NHVNB/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KXUM5D3GEFOSN76JZ2DTCEWCMNP72MSK/


[ovirt-users] Re: Enabling Libgfapi in 4.3.8 - VMs won't start

2020-02-12 Thread Jayme
I really wish these bugs would get more attention. I struggle to understand
why this isn't a priority given the performance increases people are
reporting when switching to libgfapi. No snapshots is a deal breaker for me
unfortunately.

On Wed, Feb 12, 2020 at 12:01 PM Darrell Budic 
wrote:

> Yes. I’m using libgfapi access on gluster 6.7 with overt 4.3.8 just fine,
> but I don’t use snapshots. You can work around the HA issue with DNS and
> backup server entries on the storage domain as well. Worth it to me for the
> performance, YMMV.
>
> On Feb 12, 2020, at 8:04 AM, Jayme  wrote:
>
> From my understanding it's not a default option but many users are still
> using libgfapi successfully. I'm not sure about its status in the latest
> 4.3.8 release but I know it is/was working for people in previous versions.
> The libgfapi bugs affect HA and snapshots (on 3 way replica HCI) but it
> should still be working otherwise, unless like I said something changed in
> more recent releases of oVirt.
>
> On Wed, Feb 12, 2020 at 9:43 AM Guillaume Pavese <
> guillaume.pav...@interactiv-group.com> wrote:
>
>> Libgfapi is not supported because of an old bug in qemu. That qemu bug is
>> slowly getting fixed, but the bugs about Libgfapi support in ovirt have
>> since been closed as WONTFIX and DEFERRED
>>
>> See :
>> https://bugzilla.redhat.com/show_bug.cgi?id=1465810
>> https://bugzilla.redhat.com/show_bug.cgi?id=1484660
>> https://bugzilla.redhat.com/show_bug.cgi?id=1484227 : "No plans to
>> enable libgfapi in RHHI-V for now. Closing this bug"
>> https://bugzilla.redhat.com/show_bug.cgi?id=1633642 : "Closing this as
>> no action taken from long back.Please reopen if required."
>>
>> Would be nice if someone could reopen the closed bugs so this feature
>> doesn't get forgotten
>>
>> Guillaume Pavese
>> Ingénieur Système et Réseau
>> Interactiv-Group
>>
>>
>> On Tue, Feb 11, 2020 at 9:58 AM Stephen Panicho 
>> wrote:
>>
>>> I used the cockpit-based hc setup and "option rpc-auth-allow-insecure"
>>> is absent from /etc/glusterfs/glusterd.vol.
>>>
>>> I'm going to redo the cluster this week and report back. Thanks for the
>>> tip!
>>>
>>> On Mon, Feb 10, 2020 at 6:01 PM Darrell Budic 
>>> wrote:
>>>
>>>> The hosts will still mount the volume via FUSE, but you might double
>>>> check you set the storage up as Gluster and not NFS.
>>>>
>>>> Then gluster used to need some config in glusterd.vol to set
>>>>
>>>> option rpc-auth-allow-insecure on
>>>>
>>>> I’m not sure if that got added to a hyper converged setup or not, but
>>>> I’d check it.
>>>>
>>>> On Feb 10, 2020, at 4:41 PM, Stephen Panicho 
>>>> wrote:
>>>>
>>>> No, this was a relatively new cluster-- only a couple days old. Just a
>>>> handful of VMs including the engine.
>>>>
>>>> On Mon, Feb 10, 2020 at 5:26 PM Jayme  wrote:
>>>>
>>>>> Curious do the vms have active snapshots?
>>>>>
>>>>> On Mon, Feb 10, 2020 at 5:59 PM  wrote:
>>>>>
>>>>>> Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster
>>>>>> running on CentOS 7.7 hosts. I was investigating poor Gluster performance
>>>>>> and heard about libgfapi, so I thought I'd give it a shot. Looking 
>>>>>> through
>>>>>> the documentation, followed by lots of threads and BZ reports, I've done
>>>>>> the following to enable it:
>>>>>>
>>>>>> First, I shut down all VMs except the engine. Then...
>>>>>>
>>>>>> On the hosts:
>>>>>> 1. setsebool -P virt_use_glusterfs on
>>>>>> 2. dynamic_ownership=0 in /etc/libvirt/qemu.conf
>>>>>>
>>>>>> On the engine VM:
>>>>>> 1. engine-config -s LibgfApiSupported=true --cver=4.3
>>>>>> 2. systemctl restart ovirt-engine
>>>>>>
>>>>>> VMs now fail to launch. Am I doing this correctly? I should also note
>>>>>> that the hosts still have the Gluster domain mounted via FUSE.
>>>>>>
>>>>>> Here's a relevant bit from engine.log:
>>>>>>
>>>>>> 2020-02-06

[ovirt-users] Re: Enabling Libgfapi in 4.3.8 - VMs won't start

2020-02-12 Thread Jayme
From my understanding it's not a default option but many users are still
using libgfapi successfully. I'm not sure about its status in the latest
4.3.8 release but I know it is/was working for people in previous versions.
The libgfapi bugs affect HA and snapshots (on 3 way replica HCI) but it
should still be working otherwise, unless like I said something changed in
more recent releases of oVirt.

On Wed, Feb 12, 2020 at 9:43 AM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> Libgfapi is not supported because of an old bug in qemu. That qemu bug is
> slowly getting fixed, but the bugs about Libgfapi support in ovirt have
> since been closed as WONTFIX and DEFERRED
>
> See :
> https://bugzilla.redhat.com/show_bug.cgi?id=1465810
> https://bugzilla.redhat.com/show_bug.cgi?id=1484660
> https://bugzilla.redhat.com/show_bug.cgi?id=1484227 : "No plans to enable
> libgfapi in RHHI-V for now. Closing this bug"
> https://bugzilla.redhat.com/show_bug.cgi?id=1633642 : "Closing this as no
> action taken from long back.Please reopen if required."
>
> Would be nice if someone could reopen the closed bugs so this feature
> doesn't get forgotten
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
>
> On Tue, Feb 11, 2020 at 9:58 AM Stephen Panicho 
> wrote:
>
>> I used the cockpit-based hc setup and "option rpc-auth-allow-insecure"
>> is absent from /etc/glusterfs/glusterd.vol.
>>
>> I'm going to redo the cluster this week and report back. Thanks for the
>> tip!
>>
>> On Mon, Feb 10, 2020 at 6:01 PM Darrell Budic 
>> wrote:
>>
>>> The hosts will still mount the volume via FUSE, but you might double
>>> check you set the storage up as Gluster and not NFS.
>>>
>>> Then gluster used to need some config in glusterd.vol to set
>>>
>>> option rpc-auth-allow-insecure on
>>>
>>> I’m not sure if that got added to a hyper converged setup or not, but
>>> I’d check it.
>>>
>>> On Feb 10, 2020, at 4:41 PM, Stephen Panicho 
>>> wrote:
>>>
>>> No, this was a relatively new cluster-- only a couple days old. Just a
>>> handful of VMs including the engine.
>>>
>>> On Mon, Feb 10, 2020 at 5:26 PM Jayme  wrote:
>>>
>>>> Curious do the vms have active snapshots?
>>>>
>>>> On Mon, Feb 10, 2020 at 5:59 PM  wrote:
>>>>
>>>>> Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster running
>>>>> on CentOS 7.7 hosts. I was investigating poor Gluster performance and 
>>>>> heard
>>>>> about libgfapi, so I thought I'd give it a shot. Looking through the
>>>>> documentation, followed by lots of threads and BZ reports, I've done the
>>>>> following to enable it:
>>>>>
>>>>> First, I shut down all VMs except the engine. Then...
>>>>>
>>>>> On the hosts:
>>>>> 1. setsebool -P virt_use_glusterfs on
>>>>> 2. dynamic_ownership=0 in /etc/libvirt/qemu.conf
>>>>>
>>>>> On the engine VM:
>>>>> 1. engine-config -s LibgfApiSupported=true --cver=4.3
>>>>> 2. systemctl restart ovirt-engine
>>>>>
>>>>> VMs now fail to launch. Am I doing this correctly? I should also note
>>>>> that the hosts still have the Gluster domain mounted via FUSE.
>>>>>
>>>>> Here's a relevant bit from engine.log:
>>>>>
>>>>> 2020-02-06T16:38:32.573511Z qemu-kvm: -drive file=gluster://
>>>>> node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native:
>>>>> Could not read qcow2 header: Invalid argument.
>>>>>
>>>>> The full engine.log from one of the attempts:
>>>>>
>>>>> 2020-02-06 16:38:24,909Z INFO
>>>>> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-12) [] add VM
>>>>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun treatment
>>>>> 2020-02-06 16:38:25,010Z ERROR
>>>>> [org.o

[ovirt-users] Re: Enabling Libgfapi in 4.3.8 - VMs won't start

2020-02-10 Thread Jayme
Curious do the vms have active snapshots?

On Mon, Feb 10, 2020 at 5:59 PM  wrote:

> Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster running on
> CentOS 7.7 hosts. I was investigating poor Gluster performance and heard
> about libgfapi, so I thought I'd give it a shot. Looking through the
> documentation, followed by lots of threads and BZ reports, I've done the
> following to enable it:
>
> First, I shut down all VMs except the engine. Then...
>
> On the hosts:
> 1. setsebool -P virt_use_glusterfs on
> 2. dynamic_ownership=0 in /etc/libvirt/qemu.conf
>
> On the engine VM:
> 1. engine-config -s LibgfApiSupported=true --cver=4.3
> 2. systemctl restart ovirt-engine
>
> VMs now fail to launch. Am I doing this correctly? I should also note that
> the hosts still have the Gluster domain mounted via FUSE.
>
> Here's a relevant bit from engine.log:
>
> 2020-02-06T16:38:32.573511Z qemu-kvm: -drive file=gluster://
> node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native:
> Could not read qcow2 header: Invalid argument.
>
> The full engine.log from one of the attempts:
>
> 2020-02-06 16:38:24,909Z INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (ForkJoinPool-1-worker-12) [] add VM
> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun treatment
> 2020-02-06 16:38:25,010Z ERROR
> [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
> (ForkJoinPool-1-worker-12) [] Rerun VM
> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'. Called from VDS '
> node2.ovirt.trashnet.xyz'
> 2020-02-06 16:38:25,091Z WARN
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-216) [] EVENT_ID:
> USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM yumcache on Host
> node2.ovirt.trashnet.xyz.
> 2020-02-06 16:38:25,166Z INFO  [org.ovirt.engine.core.bll.RunVmCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[df9dbac4-35c0-40ee-acd4-a1cfc959aa8b=VM]',
> sharedLocks=''}'
> 2020-02-06 16:38:25,179Z INFO
> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
> IsVmDuringInitiatingVDSCommand(
> IsVmDuringInitiatingVDSCommandParameters:{vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'}),
> log id: 2107f52a
> 2020-02-06 16:38:25,181Z INFO
> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH,
> IsVmDuringInitiatingVDSCommand, return: false, log id: 2107f52a
> 2020-02-06 16:38:25,298Z INFO  [org.ovirt.engine.core.bll.RunVmCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] Running command:
> RunVmCommand internal: false. Entities affected :  ID:
> df9dbac4-35c0-40ee-acd4-a1cfc959aa8b Type: VMAction group RUN_VM with role
> type USER
> 2020-02-06 16:38:25,313Z INFO
> [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils]
> (EE-ManagedThreadFactory-engine-Thread-216) [] Emulated machine
> 'pc-q35-rhel7.6.0' which is different than that of the cluster is set for
> 'yumcache'(df9dbac4-35c0-40ee-acd4-a1cfc959aa8b)
> 2020-02-06 16:38:25,382Z INFO
> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
> UpdateVmDynamicDataVDSCommand(
> UpdateVmDynamicDataVDSCommandParameters:{hostId='null',
> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b',
> vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@9774a64'}),
> log id: 4a83911f
> 2020-02-06 16:38:25,417Z INFO
> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH,
> UpdateVmDynamicDataVDSCommand, return: , log id: 4a83911f
> 2020-02-06 16:38:25,418Z INFO
> [org.ovirt.engine.core.vdsbroker.CreateVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] START, CreateVDSCommand(
> CreateVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452',
> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vm='VM [yumcache]'}), log id:
> 5e07ba66
> 2020-02-06 16:38:25,420Z INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
> CreateBrokerVDSCommand(HostName = node1.ovirt.trashnet.xyz,
> CreateVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452',
> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vm='VM [yumcache]'}), log id:
> 1bfa03c4
> 2020-02-06 16:38:25,424Z INFO
> [org.ovirt.engine.core.vdsbroker.builder.vminfo.VmInfoBuildUtils]
> (EE-ManagedThreadFactory-engine-Thread-216) [] Kernel FIPS - Guid:
> c3465ca2-395e-4c0c-b72e-b5b7153df452 fips: false
> 2020-02-06 16:38:25,435Z INFO
> 

[ovirt-users] I wrote an article on using Ansible to backup oVirt VMs

2020-02-10 Thread Jayme
I've been part of this mailing list for a while now and have received a lot
of great advice and help on various subjects. I read the list daily and one
thing I've noticed is that many users are curious about backup options for
oVirt (myself included). I wanted to share with the community a solution
I've come up with to easily backup multiple running oVirt VMs to OVA format
using some basic Ansible playbooks. I've put together a blog post detailing
the process which also includes links to a Github repo containing the
playbooks here:
https://blog.silverorange.com/backing-up-ovirt-vms-with-ansible-4c2fca8b3b43

Any feedback, suggestions or questions are welcome. I hope this information
is helpful.

Thanks!

- Jayme
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U65CV5A6WC6SCB2R5N66Y7HPXQ3ZQT2H/


[ovirt-users] Re: Backup Solution

2020-02-06 Thread Jayme
I understand your concerns and don't have very much personal experience
with geo-replication either, aside from knowing it's recommended in the
RHEV documentation for disaster recovery.  I do believe your specific
concern about replicating issues to the geo replica have been considered
and protected against by delayed writes and other mechanisms but I'm not
experienced enough with it to say how resilient it is.

Keep an eye out on the mailing list: I'm in the process of writing up a blog
post and setting up a GitHub repo to share what I've been doing with Ansible
to back up VMs, and this simple approach may work for you. The whole reason I
wanted to find a simple way to back up full VM images is exactly the concern
you raise: I'm worried about a major GlusterFS issue bringing down all my
VMs, and I want to be sure I have a way to recover them.

On Thu, Feb 6, 2020 at 2:07 PM Christian Reiss 
wrote:

> Hey Jamie,
>
> thanks for replying. I was wondering about gluster g-rep, but what if
> something that just happened to me (gluster f*ckup) will get replicated
> too. At this point (lost 3 HCI clusters due to Gluster) I am not really
> trusting this piece of software with my live data *and* my backups.
>
> I am really protecting myself against Gluster than anything else. So for
> backup purposes: The less Gluster, the better.
>
> -Chris.
>
> On 06/02/2020 18:31, Jayme wrote:
> > You should look at the gluster georeplication option, I think it would
> > be more appropriate for disaster recovery purposes. It is also possible
> > to export VMs as OVA which can then be reimported back into oVirt. I
> > actually just wrote an ansible playbook to do this very thing and intend
> > to share my finding and playbooks with the ovirt community hopefully
> > this week.
> >
> > On Thu, Feb 6, 2020 at 12:18 PM Christian Reiss
> > mailto:em...@christian-reiss.de>> wrote:
> >
> > Hey folks,
> >
> > Running a 3-way HCI (again (sigh)) on gluster. Now the _inside_ of
> the
> > vms is backup'ed seperatly using bareos on an hourly basis, so files
> > are
> > present with worst case 59 minutes data loss.
> >
> > Now, on the outside I thought of doing gluster snapshots and then
> > syncing those .snap dirs away to a remote 10gig connected machine on
> a
> > weekly-or-so basis. As those contents of the snaps are the oVirt
> images
> > (entire DC) I could re-setup gluster and copy those files back into
> > gluster and be done with it.
> >
> > Now some questions, if I may:
> >
> >- If the hosts remain intact but gluster dies, I simply setup
> > Gluster,
> > stop the ovirt engine (seperate standalone hardware) copy everything
> > back and start ovirt engine again. All disks are accessible again
> > (tested). The bricks are marked as down (new bricks, same name).
> There
> > is a "reset brick" button that made the bricks come back online
> again.
> > What _exactly_ does it do? Does it reset the brick info in oVirt or
> > copy
> > all the data over from another node and really, really reset the
> brick?
> >
> > - If the hosts remain intact, but the engine dies: Can I re-attach
> the
> > engine the the running cluster?
> >
> > - If hosts and engine dies and everything needs to be re-setup would
> it
> > be possible to do the setup wizard(s) again up to a running point
> then
> > copy the disk images to the new gluster-dc-data-dir? Would oVirt
> rescan
> > the dir for newly found vms?
> >
> > - If _one_ host dies, but 2 and the engine remain online: Whats the
> > oVirt way of resetting up the failed one? Reinstalling the node and
> > then
> > what? From all the cases above this is the most likely one.
> >
> > Having had to reinstall the entire Cluster three times already scares
> > me. Always gluster related.
> >
> > Again thank you community for your great efforts!
> >
> >
> > --
> > with kind regards,
> > mit freundlichen Gruessen,
> >
> > Christian Reiss
> > ___
> > Users mailing list -- users@ovirt.org <mailto:users@ovirt.org>
> > To unsubscribe send an email to users-le...@ovirt.org
> > <mailto:users-le...@ovirt.org>
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> >
> https://lists.ovir

[ovirt-users] Re: Backup Solution

2020-02-06 Thread Jayme
You should look at the gluster georeplication option, I think it would be
more appropriate for disaster recovery purposes. It is also possible to
export VMs as OVA which can then be reimported back into oVirt. I actually
just wrote an ansible playbook to do this very thing and intend to share my
findings and playbooks with the ovirt community hopefully this week.
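For the geo-replication side, the basic setup is roughly the following
(volume and host names are placeholders, the target volume has to exist on
the remote site first, and the Gluster geo-replication docs cover the
ssh/mountbroker prerequisites you'd want in place before trusting it):

  # on the primary site: create the pem keys and the geo-rep session
  gluster system:: execute gsec_create
  gluster volume geo-replication ssd_storage backupsite::backupvol create push-pem
  gluster volume geo-replication ssd_storage backupsite::backupvol start
  # it should end up Active with a Changelog Crawl status
  gluster volume geo-replication ssd_storage backupsite::backupvol status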

On Thu, Feb 6, 2020 at 12:18 PM Christian Reiss 
wrote:

> Hey folks,
>
> Running a 3-way HCI (again (sigh)) on gluster. Now the _inside_ of the
> vms is backed up separately using bareos on an hourly basis, so files are
> present with worst case 59 minutes data loss.
>
> Now, on the outside I thought of doing gluster snapshots and then
> syncing those .snap dirs away to a remote 10gig connected machine on a
> weekly-or-so basis. As those contents of the snaps are the oVirt images
> (entire DC) I could re-setup gluster and copy those files back into
> gluster and be done with it.
>
> Now some questions, if I may:
>
>   - If the hosts remain intact but gluster dies, I simply setup Gluster,
> stop the ovirt engine (seperate standalone hardware) copy everything
> back and start ovirt engine again. All disks are accessible again
> (tested). The bricks are marked as down (new bricks, same name). There
> is a "reset brick" button that made the bricks come back online again.
> What _exactly_ does it do? Does it reset the brick info in oVirt or copy
> all the data over from another node and really, really reset the brick?
>
> - If the hosts remain intact, but the engine dies: Can I re-attach the
> engine the the running cluster?
>
> - If hosts and engine dies and everything needs to be re-setup would it
> be possible to do the setup wizard(s) again up to a running point then
> copy the disk images to the new gluster-dc-data-dir? Would oVirt rescan
> the dir for newly found vms?
>
> - If _one_ host dies, but 2 and the engine remain online: Whats the
> oVirt way of resetting up the failed one? Reinstalling the node and then
> what? From all the cases above this is the most likely one.
>
> Having had to reinstall the entire Cluster three times already scares
> me. Always gluster related.
>
> Again thank you community for your great efforts!
>
>
> --
> with kind regards,
> mit freundlichen Gruessen,
>
> Christian Reiss
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/A4IVGFXXYQI4GSFINR4OZVHBYIG3RUQ5/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LSUU36SOV73JOGA2ONBGLP6QJEYNUV4O/


[ovirt-users] Re: Emergency :/ No VMs starting

2020-02-06 Thread Jayme
Appreciate the updates you've been posting.  It's concerning to me as a
Gluster user as well. It would be nice to figure out what happened here.

On Thu, Feb 6, 2020 at 11:43 AM Christian Reiss 
wrote:

> Hey,
>
> For posterity: Sadly the only way to fix this was to re-init (wipe)
> gluster and start from scratch.
>
> -Chris.
>
> On 03/02/2020 19:23, Strahil Nikolov wrote:
> > On February 3, 2020 2:29:55 PM GMT+02:00, Christian Reiss <
> em...@christian-reiss.de> wrote:
> >> Ugh,
> >>
> >> disregarding off all previous stamenets:
> >>
> >> new findinds: vdsm user can NOT read files larger than 64mb. Root can.
> >>
> >> [vdsm@node02:/rhev/data-cente[...]c51d8a18370] $ for i in 60 62 64 66
> >> 68
> >> ; do dd if=/dev/urandom of=file-$i bs=1M count=$i ; done
> >>
> >> [vdsm@node03:/rhev/data-cente[...]c51d8a18370] $ for i in 60 62 64 66
> >> 68
> >> ; do echo $i ; dd if=file-$i of=/dev/null ; done
> >> 60
> >> 122880+0 records in
> >> 122880+0 records out
> >> 62914560 bytes (63 MB) copied, 0.15656 s, 402 MB/s
> >> 62
> >> 126976+0 records in
> >> 126976+0 records out
> >> 65011712 bytes (65 MB) copied, 0.172463 s, 377 MB/s
> >> 64
> >> 131072+0 records in
> >> 131072+0 records out
> >> 67108864 bytes (67 MB) copied, 0.180701 s, 371 MB/s
> >> 66
> >> dd: error reading ‘file-66’: Permission denied
> >> 131072+0 records in
> >> 131072+0 records out
> >> 67108864 bytes (67 MB) copied, 0.105236 s, 638 MB/s
> >> 68
> >> dd: error reading ‘file-68’: Permission denied
> >> 131072+0 records in
> >> 131072+0 records out
> >> 67108864 bytes (67 MB) copied, 0.17046 s, 394 MB/s
> >>
> >>
> >> The files appeared instantly on all nodes. Writing large files,
> >> however, seems to work.
> >>
> >> I think this is the core issue.
> >>
> >>
> >> On 03/02/2020 12:22, Christian Reiss wrote:
> >>> Further findings:
> >>>
> >>> - modified data gets written to local node, not across gluster.
> >>> - vdsm user can create _new_ files on the cluster, this gets synced
> >>> immediatly.
> >>> - vdsm can modify, across all nodes newly created files, changes
> >> apply
> >>> immediately.
> >>>
> >>> I think vdsm user can not modify already existing files over the
> >>> gluster. Something selinux?
> >>>
> >>> -Chris.
> >>>
> >>> On 03/02/2020 11:46, Christian Reiss wrote:
>  Hey,
> 
>  I think I am barking up the right tree with something (else) here;
>  Note the timestamps & id's:
> 
> 
>  dd'ing a disk image as vdsm user, try 1:
> 
> 
> >> [vdsm@node03:/rhev/data-center/mnt/glusterSD/node01.dc-dus.dalason.net:
> _ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/4a55b9c0-d550-4ecb-8dd1-cc1f24f2c7ac]
> >>
>  $ date ; id ; dd if=5fca6d0e-e320-425b-a89a-f80563461add | pv  | dd
>  of=/dev/null
>  Mon  3 Feb 11:39:13 CET 2020
>  uid=36(vdsm) gid=36(kvm) groups=36(kvm),107(qemu),179(sanlock)
>  context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
>  dd: error reading ‘5fca6d0e-e320-425b-a89a-f80563461add’: Permission
> >>
>  denied
>  131072+0 records in
>  131072+0 records out
>  67108864 bytes (67 MB) copied, 0.169465 s, 396 MB/s
>  64MiB 0:00:00 [ 376MiB/s] [  <=>
> ]
>  131072+0 records in
>  131072+0 records out
>  67108864 bytes (67 MB) copied, 0.171726 s, 391 MB/s
> 
> 
>  try 2, directly afterward:
> 
> 
> >> [vdsm@node03:/rhev/data-center/mnt/glusterSD/node01.dc-dus.dalason.net:
> _ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/4a55b9c0-d550-4ecb-8dd1-cc1f24f2c7ac]
> >>
>  $ date ; id ; dd if=5fca6d0e-e320-425b-a89a-f80563461add | pv  | dd
>  of=/dev/null
>  Mon  3 Feb 11:39:16 CET 2020
>  uid=36(vdsm) gid=36(kvm) groups=36(kvm),107(qemu),179(sanlock)
>  context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
>  dd: error reading ‘5fca6d0e-e320-425b-a89a-f80563461add’: Permission
> >>
>  denied
>  131072+0 records in
>  131072+0 records out
>  67108864 bytes (67 MB) copied, 0.148846 s, 451 MB/s
>  64MiB 0:00:00 [ 427MiB/s] [  <=>
> ]
>  131072+0 records in
>  131072+0 records out
>  67108864 bytes (67 MB) copied, 0.149589 s, 449 MB/s
> 
> 
>  try same as root:
> 
> 
> >> [root@node03:/rhev/data-center/mnt/glusterSD/node01.dc-dus.dalason.net:
> _ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/4a55b9c0-d550-4ecb-8dd1-cc1f24f2c7ac]
> >>
>  # date ; id ; dd if=5fca6d0e-e320-425b-a89a-f80563461add | pv  | dd
>  of=/dev/null
>  Mon  3 Feb 11:39:33 CET 2020
>  uid=0(root) gid=0(root) groups=0(root)
>  context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
>  50GiB 0:03:06 [ 274MiB/s] [  <=>
> ]
>  104857600+0 records in
>  104857600+0 records out
>  53687091200 bytes (54 GB) copied, 186.501 s, 288 MB/s
>  104857600+0 records in
>  104857600+0 records out
>  

[ovirt-users] Re: disk snapshot status is Illegal

2020-02-04 Thread Jayme
Hello,

I believe you should be able to fix this issue using the "unlock_entity.sh"
tool on the hosted engine VM in "/usr/share/ovirt-engine/setup/dbutils" --
unfortunately there is not much documentation on it, but IIRC I've used it
to fix this very issue in the past.  Someone else may be able to chime in
on its proper use.
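Roughly, from the engine VM it looks like this (going from memory, so check
the built-in help first and take an engine database backup before unlocking
anything):

  cd /usr/share/ovirt-engine/setup/dbutils
  ./unlock_entity.sh -h                  # list supported options and types
  ./unlock_entity.sh -t all -q           # query: show locked/illegal entities
  ./unlock_entity.sh -t snapshot <id>    # unlock the offending snapshot

After that the snapshot should go back to a state the engine is willing to
delete.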

On Tue, Feb 4, 2020 at 11:07 AM Crazy Ayansh 
wrote:

> Hey Guys,
>
> Any help on it ?
>
> Thanks
>
> On Tue, Feb 4, 2020 at 4:04 PM Crazy Ayansh 
> wrote:
>
>>
>>   Hi Team,
>>
>> I am trying to delete a old snapshot of a virtual machine and getting
>> below error :-
>>
>> failed to delete snapshot 'snapshot-ind-co-ora-02' for VM
> 'ind-co-ora-ee-02'
>>
>>
>>
>> [image: image.png]
>>
>> Thanks
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C7OR4HQEKNJURWYCWURCOHAUUFCMYUW6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4J3ZF3N52GHYP2QOW7IWA5LPNJMIXPMJ/


[ovirt-users] Re: Power Management - drac5

2020-02-03 Thread Jayme
Also make sure you have "Enable IPMI Over LAN" enabled under idrac settings.

On Mon, Feb 3, 2020 at 2:15 PM Jayme  wrote:

> I recall having a problem similar to this before and it was related to the
> user roles/permissions in iDrac.  Check what access rights the user has.
> If that leads nowhere you might have some luck testing manually using the
> fence_drac5 CLI tool directly on one of the oVirt hosts.
>
> On Mon, Feb 3, 2020 at 2:09 PM Robert Webb  wrote:
>
>> I have 3 Dell R410's with iDrac6 Enterprise capability. I am trying to
>> get power management set up but the test will not pass and I am not finding
>> the docs very helpful.
>>
>> I have put in the IP, user name, password, and drac5 as the type. I have
>> tested both with and without secure checked and always get, "Test failed:
>> Internal JSON-RPC error".
>>
>> idrac log shows:
>>
>> 2020 Feb 3 17:41:22 os[19772]   root closing session from
>> 192.168.1.12
>> 2020 Feb 3 17:41:17 os[19746]   root login from 192.168.1.12
>>
>> Can someone please guide me in the right direction?
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RKFEK2ORWOODCFHYTA6WILQ7MIO2VPI2/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A2ESGMDJVDX7B6RFSL4JWJDOATWHCFPR/


[ovirt-users] Re: Power Management - drac5

2020-02-03 Thread Jayme
I recall having a problem similar to this before and it was related to the
user roles/permissions in iDrac.  Check what access rights the user has.
If that leads nowhere you might have some luck testing manually using the
fence_drac5 CLI tool directly on one of the oVirt hosts.
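For example, something along these lines run directly on a host (IP and
credentials are placeholders). Since iDRAC6 Enterprise also speaks IPMI,
fence_ipmilan is another quick sanity check, provided "IPMI Over LAN" is
enabled on the iDRAC:

  # query power status straight from the BMC over IPMI (lanplus)
  fence_ipmilan -a 192.168.1.50 -l root -p calvin -P -o status

If status can't be read from the command line, oVirt's test won't pass
either, and the CLI error is usually more helpful than "Internal JSON-RPC
error".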

On Mon, Feb 3, 2020 at 2:09 PM Robert Webb  wrote:

> I have 3 Dell R410's with iDrac6 Enterprise capability. I am trying to get
> power management set up but the test will not pass and I am not finding the
> docs very helpful.
>
> I have put in the IP, user name, password, and drac5 as the type. I have
> tested both with and without secure checked and always get, "Test failed:
> Internal JSON-RPC error".
>
> idrac log shows:
>
> 2020 Feb 3 17:41:22 os[19772]   root closing session from
> 192.168.1.12
> 2020 Feb 3 17:41:17 os[19746]   root login from 192.168.1.12
>
> Can someone please guide me in the right direction?
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RKFEK2ORWOODCFHYTA6WILQ7MIO2VPI2/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OBHBD3SZDXQNB3JPWR2VLX4U7JZHRR7G/


[ovirt-users] Re: Snapshots not possible

2020-02-03 Thread Jayme
Ah, the bug I'm referring to may only apply to replica 3 gluster.  You
appear to be using an arbiter. It sounds like you may need to file a bug
for this one

On Mon, Feb 3, 2020 at 12:05 PM Christoph Köhler <
koeh...@luis.uni-hannover.de> wrote:

> Hello Jayme,
>
> the gluster-config is this:
>
> gluster volume info gluvol3
>
> Volume Name: gluvol3
> Type: Replicate
> Volume ID: 8172ebea-c118-424a-a407-50b2fd87e372
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: glusrv01:/gluster/p1/brick1
> Brick2: glusrv02:/gluster/p1/brick1
> Brick3: glusrv03:/gluster/p1/brick1 (arbiter)
> Options Reconfigured:
> performance.client-io-threads: off
> nfs.disable: on
> transport.address-family: inet
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 1
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> performance.strict-o-direct: on
> cluster.granular-entry-heal: enable
> network.ping-timeout: 8
> auth.allow: 192.168.11.*
> client.event-threads: 4
> cluster.background-self-heal-count: 128
> cluster.heal-timeout: 60
> cluster.heal-wait-queue-length: 1280
> features.shard-block-size: 256MB
> performance.cache-size: 4096MB
> server.event-threads: 4
>
> I really do not know what to do new...
>
> Chris
>
> On 03.02.20 16:53, Jayme wrote:
> > Chris, what is the storage configuration?  I was under the impression
> > that there was a bug preventing snapshots from working when using
> > libgfapi on gluster replica configurations.  This is one of the main
> > reasons why I have been unable to implement libgfapi.
> >
> > On Mon, Feb 3, 2020 at 10:53 AM Christoph Köhler
> > mailto:koeh...@luis.uni-hannover.de>>
> wrote:
> >
> > Hi,
> >
> > since we have updated to 4.3.7 and another cluster to 4.3.8 snapshots
> > are not longer possible. In previous version all went well...
> >
> > ° libGfApi enabled
> > ° gluster 6.7.1 on gluster-server and client
> > ° libvirt-4.5.0-23.el7_7.3
> >
> > vdsm on a given node says:
> >
> > jsonrpc/2) [vds] prepared volume path:
> >
>  
> gluvol3/e54d835a-d8a5-44ae-8e17-fcba1c54e46f/images/1f43916a-bbf2-447b-b17d-ba22d4ec8c90/0e56d498-11d2-4f35-b781-a2e06d286eb8
> >
> > (clientIF:510)
> >
> > (jsonrpc/2) [virt.vm] (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73')
> > 
> >  > type="network"> >
>  
> name="gluvol3/e54d835a-d8a5-44ae-8e17-fcba1c54e46f/images/1f43916a-bbf2-447b-b17d-ba22d4ec8c90/0e56d498-11d2-4f35-b781-a2e06d286eb8"
> >
> > protocol="gluster" type="network"> > /> (vm:4497)
> >
> > (jsonrpc/2) [virt.vm] (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73')
> > Disabling drive monitoring (drivemonitor:60)
> >
> > (jsonrpc/2) [virt.vm] (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73')
> > Freezing guest filesystems (vm:4268)
> > WARN  (jsonrpc/2) [virt.vm]
> > (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73') Unable to freeze guest
> > filesystems: Guest agent is not responding: QEMU guest agent is not
> > connected (vm:4273)
> > INFO  (jsonrpc/2) [virt.vm]
> > (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73') Taking a live snapshot
> > (drives=sda, memory=True) (vm:4513)
> > ...
> > ...
> >
> > ERROR (jsonrpc/2) [virt.vm]
> > (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73') Unable to take snapshot
> > (vm:4517)
> > Traceback (most recent call last):
> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 4514,
> > in snapshot
> >   self._dom.snapshotCreateXML(snapxml, snapFlags)
> > File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py",
> > line
> > 100, in f
> >   ret = attr(*args, **kwargs)
> > File
> > "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
> > line 131, in wrapper
> >   ret = f(*args, **kwargs)
> > File "/usr/lib/python2.7/site-packages/vdsm/common/function.py",
> >

[ovirt-users] Re: Snapshots not possible

2020-02-03 Thread Jayme
Chris, what is the storage configuration?  I was under the impression that
there was a bug preventing snapshots from working when using libgfapi on
gluster replica configurations.  This is one of the main reasons why I have
been unable to implement libgfapi.

On Mon, Feb 3, 2020 at 10:53 AM Christoph Köhler <
koeh...@luis.uni-hannover.de> wrote:

> Hi,
>
> since we have updated to 4.3.7 and another cluster to 4.3.8 snapshots
> are not longer possible. In previous version all went well...
>
> ° libGfApi enabled
> ° gluster 6.7.1 on gluster-server and client
> ° libvirt-4.5.0-23.el7_7.3
>
> vdsm on a given node says:
>
> jsonrpc/2) [vds] prepared volume path:
> gluvol3/e54d835a-d8a5-44ae-8e17-fcba1c54e46f/images/1f43916a-bbf2-447b-b17d-ba22d4ec8c90/0e56d498-11d2-4f35-b781-a2e06d286eb8
>
> (clientIF:510)
>
> (jsonrpc/2) [virt.vm] (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73')
> 
>  type="network"> name="gluvol3/e54d835a-d8a5-44ae-8e17-fcba1c54e46f/images/1f43916a-bbf2-447b-b17d-ba22d4ec8c90/0e56d498-11d2-4f35-b781-a2e06d286eb8"
>
> protocol="gluster" type="network"> /> (vm:4497)
>
> (jsonrpc/2) [virt.vm] (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73')
> Disabling drive monitoring (drivemonitor:60)
>
> (jsonrpc/2) [virt.vm] (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73')
> Freezing guest filesystems (vm:4268)
> WARN  (jsonrpc/2) [virt.vm]
> (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73') Unable to freeze guest
> filesystems: Guest agent is not responding: QEMU guest agent is not
> connected (vm:4273)
> INFO  (jsonrpc/2) [virt.vm]
> (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73') Taking a live snapshot
> (drives=sda, memory=True) (vm:4513)
> ...
> ...
>
> ERROR (jsonrpc/2) [virt.vm]
> (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73') Unable to take snapshot
> (vm:4517)
> Traceback (most recent call last):
>File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4514,
> in snapshot
>  self._dom.snapshotCreateXML(snapxml, snapFlags)
>File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
> 100, in f
>  ret = attr(*args, **kwargs)
>File
> "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
> line 131, in wrapper
>  ret = f(*args, **kwargs)
>File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line
> 94, in wrapper
>  return func(inst, *args, **kwargs)
>File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2620, in
> snapshotCreateXML
>  if ret is None:raise libvirtError('virDomainSnapshotCreateXML()
> failed', dom=self)
> libvirtError: internal error: unable to execute QEMU command
> 'transaction': Could not read L1 table: Input/output error
> ...
> INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call VM.snapshot failed
> (error 48) in 4.65 seconds (__init__:312)
>
> It seems that the origin is libvirt or qemu.
>
> Regards
> Chris
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2HZEFV4GBUBLLIDYMWJEO26A2O3M6XGJ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AV6M63YPIXOLAFKH25DPLJKSKXTPD32J/


[ovirt-users] Re: Emergency :/ No VMs starting

2020-02-02 Thread Jayme
I checked my HCI cluster and those permissions seem to match what I'm
seeing.  Since there are no VMs running currently, have you tried restarting
the gluster volumes as well as the glusterd service? I'm not sure what
would have caused this with one host placed in maintenance.
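Something like the following, since nothing is running on the volume right
now (volume name taken from your output):

  systemctl restart glusterd       # on each node, one at a time
  gluster volume stop ssd_storage
  gluster volume start ssd_storage
  gluster volume heal ssd_storage info

I'd also double check the FUSE mount under /rhev/data-center/mnt/glusterSD
looks sane afterwards before trying to start a VM.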

On Sun, Feb 2, 2020 at 9:35 PM Christian Reiss 
wrote:

> Thanks for replying.
>
> /gluster_bricks/ssd_storage/ssd_storage/.shard is root:root 0660,
>
> [root@node03:/gluster_bricks/ssd_storage/ssd_storage] # l
> total 5.8M
> drwxr-xr-x.   5 vdsm kvm98 Feb  3 02:31 .
> drwxr-xr-x.   3 root root   25 Jan  9 15:49 ..
> drwxr-xr-x.   5 vdsm kvm64 Feb  3 00:31
> fec2eb5e-21b5-496b-9ea5-f718b2cb5556
> drw---. 262 root root 8.0K Jan  9 16:50 .glusterfs
> drwxr-xr-x.   3 root root 4.7M Feb  3 00:31 .shard
>
>
> [root@node03:/gluster_bricks/ssd_storage] # l
> total 8.0K
> drwxr-xr-x. 3 root root   25 Jan  9 15:49 .
> drwxr-xr-x. 3 root root 4.0K Jan  9 15:49 ..
> drwxr-xr-x. 5 vdsm kvm98 Feb  3 02:31 ssd_storage
>
>
> [root@node03:/gluster_bricks] # l
> total 8.0K
> drwxr-xr-x.  3 root root 4.0K Jan  9 15:49 .
> dr-xr-xr-x. 21 root root 4.0K Feb  3 00:03 ..
> drwxr-xr-x.  3 root root   25 Jan  9 15:49 ssd_storage
>
>
> [root@node03:/] # l
> total 348K
> drwxr-xr-x.   3 root root 4.0K Jan  9 15:49 gluster_bricks
>
>
>
> And
>
> [root@node03:/rhev/data-center/mnt/glusterSD/node01.dc-dus.dalason.net:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images]
>
> # l
> total 345K
> drwxr-xr-x. 46 vdsm kvm 8.0K Feb  2 23:18 .
> drwxr-xr-x.  5 vdsm kvm   64 Feb  3 00:31 ..
> drwxr-xr-x.  2 vdsm kvm 8.0K Jan 17 15:54
> 0b21c949-7133-4b34-b909-a6660ae12800
> drwxr-xr-x.  2 vdsm kvm  165 Feb  3 01:48
> 0dde79ab-d773-4d23-b397-7c39371ccc60
> drwxr-xr-x.  2 vdsm kvm 8.0K Jan 17 09:49
> 1347d489-012b-40fc-acb5-d00a9ea133a4
> drwxr-xr-x.  2 vdsm kvm 8.0K Jan 22 15:04
> 1ccc4db6-f47d-4474-b0fa-a0c1eddb0fa7
> drwxr-xr-x.  2 vdsm kvm 8.0K Jan 21 16:28
> 22cab044-a26d-4266-9af7-a6408eaf140c
> drwxr-xr-x.  2 vdsm kvm 8.0K Jan 30 06:03
> 288d061a-6c6c-4536-a594-3bede63c0654
> drwxr-xr-x.  2 vdsm kvm 8.0K Jan  9 16:46
> 40c51753-1533-45ab-b9de-2c51d8a18370
>
>
> Containing files as well.
>
>
> On 03/02/2020 02:27, Jayme wrote:
> > The log appears to indicate that there may be a permissions issue.  What
> > is the ownership and permissions on your gluster brick dirs and mounts?
>
> --
>   Christian Reiss - em...@christian-reiss.de /"\  ASCII Ribbon
> supp...@alpha-labs.net   \ /Campaign
>   X   against HTML
>   WEB alpha-labs.net / \   in eMails
>
>   GPG Retrieval https://gpg.christian-reiss.de
>   GPG ID ABCD43C5, 0x44E29126ABCD43C5
>   GPG fingerprint = 9549 F537 2596 86BA 733C  A4ED 44E2 9126 ABCD 43C5
>
>   "It's better to reign in hell than to serve in heaven.",
>John Milton, Paradise lost.
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A6TVHRUADUVWN2XTE6IRENF74ZDDPWUM/


[ovirt-users] Re: Emergency :/ No VMs starting

2020-02-02 Thread Jayme
The log appears to indicate that there may be a permissions issue.  What is
the ownership and permissions on your gluster brick dirs and mounts?
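For comparison, on a healthy HCI setup the storage domain directories are
owned by vdsm:kvm (36:36), which is also what the volume's
storage.owner-uid/gid options should say. Quick things to check (volume name
and paths as in your setup):

  ls -ld /gluster_bricks/ssd_storage/ssd_storage
  ls -ld /rhev/data-center/mnt/glusterSD/*_ssd__storage
  gluster volume get ssd_storage all | grep storage.owner

If those don't line up, it would explain the Permission denied errors in the
client log.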

On Sun, Feb 2, 2020 at 8:21 PM Christian Reiss 
wrote:

> Hey folks,
>
> oh Jesus. 3-Way HCI. Gluster w/o any issues:
>
> [root@node01:/var/log/glusterfs] # gluster vol info  ssd_storage
>
> Volume Name: ssd_storage
> Type: Replicate
> Volume ID: d84ec99a-5db9-49c6-aab4-c7481a1dc57b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: node01.company.com:/gluster_bricks/ssd_storage/ssd_storage
> Brick2: node02.company.com:/gluster_bricks/ssd_storage/ssd_storage
> Brick3: node03.company.com:/gluster_bricks/ssd_storage/ssd_storage
> Options Reconfigured:
> performance.client-io-threads: on
> nfs.disable: on
> transport.address-family: inet
> performance.strict-o-direct: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 1
> features.shard: on
> user.cifs: off
> cluster.choose-local: off
> client.event-threads: 4
> server.event-threads: 4
> network.ping-timeout: 30
> storage.owner-uid: 36
> storage.owner-gid: 36
> cluster.granular-entry-heal: enab
>
>
> [root@node01:/var/log/glusterfs] # gluster vol status  ssd_storage
> Status of volume: ssd_storage
> Gluster process TCP Port  RDMA Port  Online
> Pid
>
> --
> Brick node01.company.com:/gluster_br
> icks/ssd_storage/ssd_storage49152 0  Y
> 63488
> Brick node02.company.com:/gluster_br
> icks/ssd_storage/ssd_storage49152 0  Y
> 18860
> Brick node03.company.com:/gluster_br
> icks/ssd_storage/ssd_storage49152 0  Y
> 15262
> Self-heal Daemon on localhost   N/A   N/AY
> 63511
> Self-heal Daemon on node03.dc-dus.dalason.n
> et  N/A   N/AY
> 15285
> Self-heal Daemon on 10.100.200.12   N/A   N/AY
> 18883
>
> Task Status of Volume ssd_storage
>
> --
> There are no active volume tasks
>
>
>
> [root@node01:/var/log/glusterfs] # gluster vol heal ssd_storage info
> Brick node01.company.com:/gluster_bricks/ssd_storage/ssd_storage
> Status: Connected
> Number of entries: 0
>
> Brick node02.company.com:/gluster_bricks/ssd_storage/ssd_storage
> Status: Connected
> Number of entries: 0
>
> Brick node03.company.com:/gluster_bricks/ssd_storage/ssd_storage
> Status: Connected
> Number of entries: 0
>
>
>
> And everything is mounted where its supposed to. But no VMs start due to
> IO Error. I checked a gluster-based file (CentOS iso) md5 against a
> local copy, it matches. One VM at one point managed to start, but failed
> subsequent starts. The data/disks seem okay,
>
> /var/log/glusterfs/"rhev-data-center-mnt-glusterSD-node01.company.com:_ssd__storage.log-20200202"
>
> has entries like:
>
>
> [2020-02-01 23:15:15.449902] W [MSGID: 114031]
> [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-ssd_storage-client-1:
> remote operation failed. Path:
> /.shard/86da0289-f74f-4200-9284-678e7bd76195.1405
> (----) [Permission denied]
> [2020-02-01 23:15:15.484363] W [MSGID: 114031]
> [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-ssd_storage-client-1:
> remote operation failed. Path:
> /.shard/86da0289-f74f-4200-9284-678e7bd76195.1400
> (----) [Permission denied]
>
>
> Before this happened we put one host into maintenance mode, it all
> started during migration.
>
> Any help? We're sweating blood here.
>
>
>
> --
> with kind regards,
> mit freundlichen Gruessen,
>
> Christian Reiss
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VJUJK7USH2BV4ZXLFXAA7EJMUVAUGIF4/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C5WRXWWSDJK6OGPZVCEOHZM4IIMAJ2XQ/


[ovirt-users] Re: Gluster Heal Issue

2020-01-31 Thread Jayme
I have run into this exact issue before and resolved it by simply syncing
over the missing files and running a heal on the volume (can take a little
time to correct).
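In my case it was essentially copying the file the out-of-date brick was
missing from a brick that had it, then letting a heal run. Very roughly,
using the gfid from your mail as the example (this pokes at brick internals,
so I would only do it with the volume quiet, a backup of the brick at hand,
and after double-checking the file really is absent on that brick):

  rsync -av node01:/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6 \
        /gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/
  gluster volume heal ssd_storage
  gluster volume heal ssd_storage info    # entry count should drop to 0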


On Fri, Jan 31, 2020 at 7:05 PM Christian Reiss 
wrote:

> Hey folks,
>
> in our production setup with 3 nodes (HCI) we took one host down
> (maintenance, stop gluster, poweroff via ssh/ovirt engine). Once it was
> up the gluster hat 2k healing entries that went down in a matter on 10
> minutes to 2.
>
> Those two give me a headache:
>
> [root@node03:~] # gluster vol heal ssd_storage info
> Brick node01:/gluster_bricks/ssd_storage/ssd_storage
> 
> 
> Status: Connected
> Number of entries: 2
>
> Brick node02:/gluster_bricks/ssd_storage/ssd_storage
> Status: Connected
> Number of entries: 0
>
> Brick node03:/gluster_bricks/ssd_storage/ssd_storage
> 
> 
> Status: Connected
> Number of entries: 2
>
> No paths, only gfid. We took down node2, so it does not have the file:
>
> [root@node01:~] # md5sum
>
> /gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
> 75c4941683b7eabc223fc9d5f022a77c
>
> /gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
>
> [root@node02:~] # md5sum
>
> /gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
> md5sum:
> /gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6:
>
> No such file or directory
>
> [root@node03:~] # md5sum
>
> /gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
> 75c4941683b7eabc223fc9d5f022a77c
>
> /gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
>
> The other two files are md5-identical.
>
> These flags are identical, too:
>
> [root@node01:~] # getfattr -d -m . -e hex
>
> /gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
> getfattr: Removing leading '/' from absolute path names
> # file:
>
> gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
>
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x
> trusted.afr.ssd_storage-client-1=0x004f0001
> trusted.gfid=0xa121e4fb09844e4194d78f0c4f87f4b6
>
> trusted.gfid2path.d4cf876a215b173f=0x62653331383633382d653861302d346336642d393737642d3761393337616138343830362f38366461303238392d663734662d343230302d393238342d3637386537626437363139352e31323030
>
> trusted.glusterfs.mdata=0x015e349b1e1139aa2a5e349b1e1139aa2a5e349949304a5eb2
>
> getfattr: Removing leading '/' from absolute path names
> # file:
>
> gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
>
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x
> trusted.afr.ssd_storage-client-1=0x004f0001
> trusted.gfid=0xa121e4fb09844e4194d78f0c4f87f4b6
>
> trusted.gfid2path.d4cf876a215b173f=0x62653331383633382d653861302d346336642d393737642d3761393337616138343830362f38366461303238392d663734662d343230302d393238342d3637386537626437363139352e31323030
>
> trusted.glusterfs.mdata=0x015e349b1e1139aa2a5e349b1e1139aa2a5e349949304a5eb2
>
> Now, I don't dare simply proceed without some advice.
> Anyone got a clue on how to resolve this issue? File #2 is identical to
> this one, from a problem point of view.
>
> Have a great weekend!
> -Chris.
>
> --
> with kind regards,
> mit freundlichen Gruessen,
>
> Christian Reiss
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FGIQFIRC6QYN4AYB3NRPM42KX4ENIF2A/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4DYDEYRC57V3CZB6Z5RVBXKAV3LLIXDS/


[ovirt-users] Re: 0virt VMs status down after host reboot

2020-01-29 Thread Jayme
Hello,

It's my understanding that the engine will make every attempt at restarting
highly available VMs. All of my VMs are highly available and none have
ever failed to start after rebooting hosts.
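
If you manage VMs with Ansible, the same flag can also be set through the
ovirt_vm module. A rough sketch (untested here; the VM name and priority are
placeholders):

- name: Mark a VM as highly available
  ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: myvm
    high_availability: true
    high_availability_priority: 50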


On Wed, Jan 29, 2020 at 9:28 AM Eugène Ngontang  wrote:

> I was looking at whether the "High Availability" option could be used for
> automatic startup, but the oVirt documentation is pretty clear about it, as
> explained here <https://www.ovirt.org/develop/ha-vms.html>; see the
> documentation in the screenshot.
>
> I was wondering if there may be a flag that controls the VMs startup
> behavior...
>
> Le mer. 29 janv. 2020 à 12:07, Jayme  a écrit :
>
>> Check if highly available is selected in vm configuration
>>
>> On Wed, Jan 29, 2020 at 2:55 AM Eugène Ngontang 
>> wrote:
>>
>>> Hi all,
>>>
>>> I've set up an infrastructure with OVirt, using self-hosted engine.
>>>
>>> I use some ansible scripts from my Virtualization Host (the physical
>>> machine), to bootstrap the hosted engine, and create a set of virtual
>>> machines on which I deploy a k8s cluster.
>>>
>>> The deployment goes well, and everything is OK.
>>>
>>> Now I'm doing some reboot tests, and when I reboot the physical server,
>>> only the hosted-engine vm is up after the reboot, the rest of VMs and thus
>>> the k8s cluster are down.
>>>
>>> Has anyone here ever experienced this issue? What can cause it, and how
>>> can one automate virtual machine startup in RHVE/oVirt?
>>>
>>> Thanks.
>>>
>>> Regards,
>>> Eugene
>>>
>>>
>>> --
>>> LesCDN <http://lescdn.com>
>>> engont...@lescdn.com
>>> 
>>> *Aux hommes il faut un chef, et au*
>>>
>>> * chef il faut des hommes!L'habit ne fait pas le moine, mais lorsqu'on
>>> te voit on te juge!*
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YWVB43KDNMXVIZCCIZJI5EOJGZ7ATLZK/
>>>
>>
>
> --
> LesCDN <http://lescdn.com>
> engont...@lescdn.com
> 
> *Aux hommes il faut un chef, et au*
>
> * chef il faut des hommes!L'habit ne fait pas le moine, mais lorsqu'on te
> voit on te juge!*
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IQHRALMLV7D3JZ4JHIQFLTTOZ57PE7N5/


[ovirt-users] Re: 0virt VMs status down after host reboot

2020-01-29 Thread Jayme
Check if highly available is selected in vm configuration

On Wed, Jan 29, 2020 at 2:55 AM Eugène Ngontang  wrote:

> Hi all,
>
> I've set up an infrastructure with OVirt, using self-hosted engine.
>
> I use some ansible scripts from my Virtualization Host (the physical
> machine), to bootstrap the hosted engine, and create a set of virtual
> machines on which I deploy a k8s cluster.
>
> The deployment goes well, and everything is OK.
>
> Now I'm doing some reboot tests, and when I reboot the physical server,
> only the hosted-engine vm is up after the reboot, the rest of VMs and thus
> the k8s cluster are down.
>
> Has anyone here ever experienced this issue? What can cause it, and how can
> one automate virtual machine startup in RHVE/oVirt?
>
> Thanks.
>
> Regards,
> Eugene
>
>
> --
> LesCDN 
> engont...@lescdn.com
> 
> *Aux hommes il faut un chef, et au*
>
> * chef il faut des hommes!L'habit ne fait pas le moine, mais lorsqu'on te
> voit on te juge!*
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YWVB43KDNMXVIZCCIZJI5EOJGZ7ATLZK/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NMPIX2LRI24ZERYVAQRHYYC3TCH7DJDO/


[ovirt-users] Re: command line vm start/stop

2020-01-27 Thread Jayme
Hello,

I believe the best way would be to use ansible with the ovirt ansible
modules i.e.
https://docs.ansible.com/ansible/latest/modules/ovirt_vm_module.html#ovirt-vm-module
--
you can do it with a simple task like:

- name: Stop vm
  ovirt_vm:
    state: stopped
    name: myvm

you could also use "virsh" on the command line to stop VMs but I'd
stick with using ansible personally.
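
For starting, it's the same module with a different state; a quick sketch
along the same lines (the VM name is a placeholder, and like the stop example
above it still needs auth details):

- name: Start vm
  ovirt_vm:
    state: running
    name: myvm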


On Mon, Jan 27, 2020 at 10:41 AM  wrote:

> Hello Experts.
>
> In version  3.5.2.1-1.el6, we used an "ovirt-shell -E action..." command
> to start/stop virtual machines from command line. In version 4.3.7.2-1.el7
> ovirt-shell is deprecated. Please advise how to start/stop them from
> command line. vdsm-client provides only destroy/shutdown/reset/cont,
> nothing about startvm or poweron.
>
> Regards.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QRNBIPH42NYX7VY7YBXO235R2HKQOCRH/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/72ODAYQUBD26HOOGOYN7Q6DTPKV2V2JW/


[ovirt-users] Re: ovirt_vm ansible module -- how to wait for ova export to finish

2020-01-24 Thread Jayme
I ran into one snag when testing full backups overnight.  Exporting a large
VM failed. I checked logs and discovered that it was due to an ansible
timeout, but not in my playbook.  I increased the timeout by
creating /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
on the hosted engine, and will give it another try.
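
For reference, the file only needs a single key/value line; I believe the
variable is ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT and the value is in
minutes, so something like:

ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=120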

On Thu, Jan 23, 2020 at 11:52 AM Jayme  wrote:

> Jan,
>
> I just ran a quick test with your suggestion and it seems like it is
> working as intended.  I need to do more testing with it but it looks like
> this may well be a viable solution:
>
> ---
> - hosts: localhost
>   connection: local
>
>   vars:
> host: hostX
> cluster: default
> directory: '/backup/'
> ova_ext: ova
> vms:
>   - ovatest
>   - ovatest2
>
>   tasks:
> - name: Backup VMs
>   include_tasks: backup-test.yml
>   loop: "{{ vms }}"
>
>
> backup-test.yml:
>
> ---
>
> - name: "Export VM to OVA"
>   ovirt_vm:
> auth: "{{ ovirt_auth }}"
> name: "{{ item }}"
> state: exported
> cluster: "{{ cluster }}"
> export_ova:
> host: "{{ host }}"
> filename: "{{ item }}.{{ ova_ext }}"
> directory: "{{ directory }}"
>
> - name: "Wait for export to finish"
>   wait_for:
> path: "/backup/vm.ova"  # will change to using vars here
>
> The backup folder isn't accessible from where I'm running ansible from so
> I will need to check it remotely but that should be easy to solve.
>
> On Thu, Jan 23, 2020 at 11:43 AM Jan Zmeskal  wrote:
>
>> Hi Jayme,
>>
>> let us know how it went. Anyway, if you ever run into hard timeout, don't
>> despair. It also happened to me once when working with Ansible oVirt
>> modules and I just created an issue on GitHub where I requested this
>> timeout to be changed from hard-coded value to a configurable parameter
>> with some reasonable default. It was implemented rather quickly.
>>
>> Jan
>>
>> On Thu, Jan 23, 2020 at 4:32 PM Jayme  wrote:
>>
>>> That may work since the file will be tmp until finished being written. I
>>> was also just looking at the ovirt event info module
>>> https://docs.ansible.com/ansible/latest/modules/ovirt_event_info_module.html#ovirt-event-info-module
>>>  --
>>> I was thinking that I might be able to watch the event info wait on the
>>> event which shows the export was successful i.e. Vm X was exported
>>> successfully as a Virtual Appliance to path
>>>
>>> There is also the event index which could be useful in terms of getting
>>> a starting point for the event search.
>>>
>>> I thought there would be a module or API for the running oVirt task list
>>> but so far I haven't been able to find any way to get info on oVirt tasks.
>>>
>>> I'll see if I can get something working with your suggestion and keep
>>> looking at API and ansible modules to see which make sense to use.
>>>
>>> I'm also worried that timeout issues may occur if I start waiting in
>>> some cases an hour or more for very large VM backups to complete before
>>> moving on to the next with ansible.
>>>
>>> Thanks!
>>>
>>> Jayme
>>>
>>> On Thu, Jan 23, 2020 at 10:00 AM Jan Zmeskal 
>>> wrote:
>>>
>>>> Hi Jayme,
>>>>
>>>> here's my idea. I haven't tested it but I believe it should work.
>>>> 1. Create a new task file (let's call it export_vm.yaml) and include
>>>> two tasks in there:
>>>> 1.1. First task uses ovirt_vm module (pretty much what you already
>>>> have) to export VM
>>>> 1.2. Second task uses wait_for
>>>> <https://docs.ansible.com/ansible/latest/modules/wait_for_module.html>
>>>> module (specifically its path parameter) to wait until the OVA file in
>>>> /backup exists
>>>> 2. Loop over those two tasks as explained here
>>>> <https://ericsysmin.com/2019/06/20/how-to-loop-blocks-of-code-in-ansible/>
>>>> .
>>>>
>>>> Hope this helps.
>>>>
>>>> Jan
>>>>
>>>> On Wed, Jan 22, 2020 at 4:15 PM Jayme  wrote:
>>>>
>>>>> I wrote a simple task that is using the ovirt_vm module
>>>>> https://docs.ansible.com/ansible/latest/modules/ovirt_vm_module.html --
>>>>> it essentially loops over a list of vms and exports them to OVA.
>>>>>
>>>>

[ovirt-users] Re: API OVA export - getting job id/job status

2020-01-24 Thread Jayme
Ansible can be daunting simply because of how powerful it is, but it’s
actually quite easy to run a simple playbook like the one I’m writing for
backups, especially if run from the oVirt hosted engine, as it already has
ansible and all the needed dependencies installed.
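
For anyone wanting to try it, the "{{ ovirt_auth }}" variable my export tasks
reference comes from a login task along these lines (a sketch; the engine URL
and password variable are placeholders for your own values):

- name: Obtain oVirt SSO token
  ovirt_auth:
    url: https://engine.example.com/ovirt-engine/api
    username: admin@internal
    password: "{{ engine_password }}"
    insecure: true

A matching ovirt_auth task with state: absent at the end of the play revokes
the token again.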

I’m working on throwing together a guide with more detailed info very soon
and will post it to this group when done.



On Fri, Jan 24, 2020 at 7:31 AM  wrote:

> Hi Jan,
>
> I've seen this post too, but I have absolutely no idea how to deal with
> ansible, so I can't say much on this topic.
> My hope was that I had made a mistake in my script :)
>
> Lars
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EOPA5GFDIAAHS5BAVGSUT3QUFOCE6CTC/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CCRL52N47A4QKBR6GOBZJJSOAK7PAIV3/


[ovirt-users] Re: API OVA export - getting job id/job status

2020-01-24 Thread Jayme
Indeed. I felt like there was a need for a simple way to back up oVirt VMs.
Ansible may just be the answer. After some more testing of the playbook I
plan to publish a blog post/guide on the subject so others can use it.



On Fri, Jan 24, 2020 at 4:42 AM Jan Zmeskal  wrote:

> Hi Lars, you might find this email thread interesting:
> https://lists.ovirt.org/archives/list/users@ovirt.org/thread/PXYAQ7YEBQCUWCAQCFAFXB3545LNB23X/
>
> Jayme is trying to solve pretty much the same problem as you - although
> he's using the Ansible approach instead of SDK. Feel free to join that
> conversation. At this point it seems like he might have found a good
> solution, but he needs to test it.
>
> Jan
>
> On Fri, Jan 24, 2020 at 8:33 AM  wrote:
>
>> hi,
>>
>> I tried this with API 4.2 and 4.3.
>> The purpose of the following script is to export a given list of VMs as OVA
>> one after another.
>> To achieve that I need to monitor the job status and pause the script until
>> the actual export is done.
>> The script works fine, except for restricting the returned jobs to
>> the one specific job I need to monitor.
>> Therefore the script pauses on *any* running job.
>> The working script:
>>
>> #!/usr/bin/python
>>
>> import logging
>> import time
>>
>> import ovirtsdk4 as sdk
>> import ovirtsdk4.types as types
>>
>> connection = sdk.Connection(
>> url='https://ovirtman12/ovirt-engine/api',
>> username='admin@internal',
>> password='***',
>> ca_file='/etc/pki/ovirt-engine/ca-ovirtman12.pem',
>> )
>>
>>
>> hosts_service = connection.system_service().hosts_service()
>> hosts = hosts_service.list()[0]
>>
>> vms_service = connection.system_service().vms_service()
>> vms = vms_service.list(search='name=blxlic954')
>>
>> for vm in vms:
>> # print("%s (%s)" % (vm.name, vm.id))
>> vm_service = vms_service.vm_service(vm.id)
>> start_time = (time.strftime('%Y%m%d_%H%M%S',
>> time.localtime(int(time.time()
>> vm_service.export_to_path_on_host(
>> host=types.Host(id=hosts.id),
>> directory='/nfs_c3/export',
>> filename=('%s_backup_%s.ova' % (vm.name, start_time)),
>> wait=True,
>> )
>> #time.sleep(5)
>> jobs_service = connection.system_service().jobs_service()
>> jobs = jobs_service.list(search='')
>> for job in jobs:
>> print(job.id, job.description)
>> #job = jobs_service.job_service(job.id).get()
>> while job.status == types.JobStatus.STARTED:
>> time.sleep(10)
>> job = jobs_service.job_service(job.id).get()
>> print('job-status: %s' % (job.status))
>>
>> connection.close()
>>
>> The line
>> jobs = jobs_service.list(search='')
>> works fine as long as the search pattern is empty.
>>
>> if i try to restrict the results returned like this:
>> jobs = jobs_service.list(search='description=*blxlic954*')
>> i get an error:
>>
>>  bad sql grammar [select * from (select * from job where ( job id in
>> (select distinct job.job id from  job   where  (  ) ))  order by start time
>> asc) as t1 offset (1 -1) limit 2147483647]; nested exception is
>> org.postgresql.util.psqlexception: error: syntax error at or near ")"
>>
>> looks like the 'where' clause is not filled correctly.
>>
>> Am I wrong with my syntax or is that a bug?
>> Is there another way to get the correct job id/status?
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZVFUE32ORCMN7EBRR7PXKRWAJJV4MAIB/
>>
>
>
> --
>
> Jan Zmeskal
>
> Quality Engineer, RHV Core System
>
> Red Hat <https://www.redhat.com>
> <https://www.redhat.com>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YRKMLOYWOY24WVYJZBI77ZDDOMKVAIPP/


[ovirt-users] Re: Hyperconverged solution

2020-01-24 Thread Jayme
I believe you would have to either combine the drives with RAID or LVM so
they are presented as one device, or just create multiple storage domains.

On Fri, Jan 24, 2020 at 5:41 AM Benedetto Vassallo <
benedetto.vassa...@unipa.it> wrote:

> Def. Quota Nir Soffer :
>
> > Hyperconverged uses gluster, and gluster uses replication (replica 3 or
> > replica 2 + arbiter) so adding raid below may not be needed.
>
> Yes, I know this, but is there a way from the UI to create the storage
> domain using more than one disk?
> I can't understand this in the guide available at
>
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>
> >
> > You may use the SSDs for lvm cache for the gluster setup.
> >
>
> That would be great!
>
>
> > I would try to ask on Gluster mailing list about this.
>
> Thank you, I'm waiting for your news.
>
> Best Regards
>
>
> --
> --
> Benedetto Vassallo
> Responsabile U.O. Sviluppo e manutenzione dei sistemi
> Sistema Informativo di Ateneo
> Università degli studi di Palermo
>
> Phone: +3909123860056
> Fax:   +3909123860880
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/INQJOX7FY7MDXADMIVWZKPS6D6DZZ5YY/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YH5AMDC4CAHJJXRNHFR5TLHPMBH26QR5/


[ovirt-users] Re: ovirt_vm ansible module -- how to wait for ova export to finish

2020-01-23 Thread Jayme
Jan,

I just ran a quick test with your suggestion and it seems like it is
working as intended.  I need to do more testing with it but it looks like
this may well be a viable solution:

---
- hosts: localhost
  connection: local

  vars:
    host: hostX
    cluster: default
    directory: '/backup/'
    ova_ext: ova
    vms:
      - ovatest
      - ovatest2

  tasks:
    - name: Backup VMs
      include_tasks: backup-test.yml
      loop: "{{ vms }}"


backup-test.yml:

---

- name: "Export VM to OVA"
  ovirt_vm:
auth: "{{ ovirt_auth }}"
name: "{{ item }}"
state: exported
cluster: "{{ cluster }}"
export_ova:
host: "{{ host }}"
filename: "{{ item }}.{{ ova_ext }}"
directory: "{{ directory }}"

- name: "Wait for export to finish"
  wait_for:
path: "/backup/vm.ova"  # will change to using vars here

The backup folder isn't accessible from where I'm running ansible from so I
will need to check it remotely but that should be easy to solve.
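
Something along these lines is what I have in mind for the remote check (an
untested sketch; it assumes the export host is in my Ansible inventory and
that two hours is a long enough timeout for the largest VMs):

- name: "Wait for export to finish"
  wait_for:
    path: "{{ directory }}{{ item }}.{{ ova_ext }}"
    timeout: 7200
  delegate_to: "{{ host }}"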

On Thu, Jan 23, 2020 at 11:43 AM Jan Zmeskal  wrote:

> Hi Jayme,
>
> let us know how it went. Anyway, if you ever run into hard timeout, don't
> despair. It also happened to me once when working with Ansible oVirt
> modules and I just created an issue on GitHub where I requested this
> timeout to be changed from hard-coded value to a configurable parameter
> with some reasonable default. It was implemented rather quickly.
>
> Jan
>
> On Thu, Jan 23, 2020 at 4:32 PM Jayme  wrote:
>
>> That may work since the file will be tmp until finished being written. I
>> was also just looking at the ovirt event info module
>> https://docs.ansible.com/ansible/latest/modules/ovirt_event_info_module.html#ovirt-event-info-module
>>  --
>> I was thinking that I might be able to watch the event info wait on the
>> event which shows the export was successful i.e. Vm X was exported
>> successfully as a Virtual Appliance to path
>>
>> There is also the event index which could be useful in terms of getting a
>> starting point for the event search.
>>
>> I thought there would be a module or API for the running oVirt task list
>> but so far I haven't been able to find any way to get info on oVirt tasks.
>>
>> I'll see if I can get something working with your suggestion and keep
>> looking at API and ansible modules to see which make sense to use.
>>
>> I'm also worried that timeout issues may occur if I start waiting in some
>> cases an hour or more for very large VM backups to complete before moving
>> on to the next with ansible.
>>
>> Thanks!
>>
>> Jayme
>>
>> On Thu, Jan 23, 2020 at 10:00 AM Jan Zmeskal  wrote:
>>
>>> Hi Jayme,
>>>
>>> here's my idea. I haven't tested it but I believe it should work.
>>> 1. Create a new task file (let's call it export_vm.yaml) and include two
>>> tasks in there:
>>> 1.1. First task uses ovirt_vm module (pretty much what you already have)
>>> to export VM
>>> 1.2. Second task uses wait_for
>>> <https://docs.ansible.com/ansible/latest/modules/wait_for_module.html>
>>> module (specifically its path parameter) to wait until the OVA file in
>>> /backup exists
>>> 2. Loop over those two tasks as explained here
>>> <https://ericsysmin.com/2019/06/20/how-to-loop-blocks-of-code-in-ansible/>
>>> .
>>>
>>> Hope this helps.
>>>
>>> Jan
>>>
>>> On Wed, Jan 22, 2020 at 4:15 PM Jayme  wrote:
>>>
>>>> I wrote a simple task that is using the ovirt_vm module
>>>> https://docs.ansible.com/ansible/latest/modules/ovirt_vm_module.html --
>>>> it essentially loops over a list of vms and exports them to OVA.
>>>>
>>>> The problem I have is the task is deemed changed once it successfully
>>>> submits the export task to oVirt. This means that if I gave it a list of
>>>> 100 Vms I believe it would start an export task on all of them. I want to
>>>> prevent this and have it only export one VM at a time. In order to do this
>>>> I believe I will need to find a way for the task to wait and somehow verify
>>>> that the export was completed before submitting a task for the next VM
>>>> export.
>>>>
>>>> Any ideas?
>>>>
>>>> - name: Export the VM
>>>>   ovirt_vm:
>>>> auth: "{{ ovirt_auth }}"
>>>> name: "{{ item }}"
>>>> state: exported
>>>> cluster: default
>>>> 

[ovirt-users] Re: ovirt_vm ansible module -- how to wait for ova export to finish

2020-01-23 Thread Jayme
That may work, since the file will be a tmp file until it has finished being
written. I was also just looking at the ovirt event info module
https://docs.ansible.com/ansible/latest/modules/ovirt_event_info_module.html#ovirt-event-info-module
--
I was thinking that I might be able to watch the event info and wait on the
event which shows the export was successful, i.e. Vm X was exported
successfully as a Virtual Appliance to path

There is also the event index which could be useful in terms of getting a
starting point for the event search.

I thought there would be a module or API for the running oVirt task list
but so far I haven't been able to find any way to get info on oVirt tasks.

I'll see if I can get something working with your suggestion and keep
looking at API and ansible modules to see which make sense to use.

I'm also worried that timeout issues may occur if I start waiting in some
cases an hour or more for very large VM backups to complete before moving
on to the next with ansible.

Thanks!

Jayme

On Thu, Jan 23, 2020 at 10:00 AM Jan Zmeskal  wrote:

> Hi Jayme,
>
> here's my idea. I haven't tested it but I believe it should work.
> 1. Create a new task file (let's call it export_vm.yaml) and include two
> tasks in there:
> 1.1. First task uses ovirt_vm module (pretty much what you already have)
> to export VM
> 1.2. Second task uses wait_for
> <https://docs.ansible.com/ansible/latest/modules/wait_for_module.html>
> module (specifically its path parameter) to wait until the OVA file in
> /backup exists
> 2. Loop over those two tasks as explained here
> <https://ericsysmin.com/2019/06/20/how-to-loop-blocks-of-code-in-ansible/>
> .
>
> Hope this helps.
>
> Jan
>
> On Wed, Jan 22, 2020 at 4:15 PM Jayme  wrote:
>
>> I wrote a simple task that is using the ovirt_vm module
>> https://docs.ansible.com/ansible/latest/modules/ovirt_vm_module.html --
>> it essentially loops over a list of vms and exports them to OVA.
>>
>> The problem I have is the task is deemed changed once it successfully
>> submits the export task to oVirt. This means that if I gave it a list of
>> 100 Vms I believe it would start an export task on all of them. I want to
>> prevent this and have it only export one VM at a time. In order to do this
>> I believe I will need to find a way for the task to wait and somehow verify
>> that the export was completed before submitting a task for the next VM
>> export.
>>
>> Any ideas?
>>
>> - name: Export the VM
>>   ovirt_vm:
>> auth: "{{ ovirt_auth }}"
>> name: "{{ item }}"
>> state: exported
>> cluster: default
>> export_ova:
>> host: Host0
>> filename: "{{ item }}"
>> directory: /backup/
>>   with_items: "{{ vms }}"
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PXYAQ7YEBQCUWCAQCFAFXB3545LNB23X/
>>
>
>
> --
>
> Jan Zmeskal
>
> Quality Engineer, RHV Core System
>
> Red Hat <https://www.redhat.com>
> <https://www.redhat.com>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMT6J3LDZGWTI42PLLX2RLE64YDBERYT/


[ovirt-users] Re: OVA export to NFS share slow

2020-01-23 Thread Jayme
Hello,

I did a test this morning attaching my NFS server as an export domain.  I
shut down the same 50Gb VM and exported it to the NFS export domain with the
oVirt GUI.  Surprisingly, I had very similar results to the OVA exports: it
took just about the same amount of time, ~10 minutes, maybe even a tad longer
(although I had more disk activity on the NFS server vs when I was doing
OVA exporting). I would expect exporting as OVA to add some overhead
with the loop device as well as performing a snapshot operation and
whatever else it does (I don't know the inner workings of the scripts
involved).

I'm not sure why your OVA export to direct-attached NFS would be 20x slower
than what I'm seeing in my environment.

On Thu, Jan 23, 2020 at 2:49 AM Jürgen Walch  wrote:

> ➢ I have a very similar setup as you and have just very recently started
> testing OVA exports for backup purposes to NFS attached storage.
> ➢ I have a three node HCI on GlusterFS (SSD backed) with 10Gbit and my
> ovirt management network is 10Gbit as well.  My NFS storage server is an 8
> x 8Tb 7200 RPM drives in RAID10 running CentOS 8x with 10Gbit link.
>
> Our setups are indeed similar, the main difference being, that my
> management network including the connection to the NFS server is only
> 1Gbit. Only GlusterFS has 10Gbit here.
>
> ➢ I haven't done specific measurement yet as I just setup the storage
> today but a test export of a 50Gb VM took just about ~10 minutes start to
> finish.
>
> Doing the maths this is ~80MiB/s and 20 times faster than in my setup.
> Lucky you 
> Much less than your 10Gbit link between NFS Server and nodes could
> provide, but maybe close to the limit of the drives in your NFS server.
>
> The interesting thing is, that when setting up an export domain, stopping
> the VM and doing an export to the *same* NFS server, I'm getting write
> speeds as expected.
> Only the OVA export is terribly slow.
>
> The main difference I can see is the use of a loop device when exporting
> to OVA.
> The export to the export domain does something like
>
> /usr/bin/qemu-img convert -p -t none -T none -f raw {source disk
> on GlusterFS} {target disk on NFS server}
>
> whereas the OVA export will do
>
> /usr/bin/qemu-img convert -T none -O qcow2 {source snapshot on
> GlusterFS} /dev/loopX
>
> with /dev/loopX pointing to the NFS OVA target image.
>
> If you have the time and are willing to test, I would be interested in how
> fast your exports to an export domain are
>
> --
>
>  juergen
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FLQVWTKINNSGMIZVTNFIXOE2C4DF4VZ6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/56MISAD2QCUCV5I3DYUEI2WOFXGCU5YE/


[ovirt-users] Re: Gluster storage options

2020-01-23 Thread Jayme
Yes, you should install oVirt Node on separate boot drives and add your
additional drives for gluster. You do not have to do anything with gluster
beforehand. The oVirt installer will prepare the drives and do all the
needed gluster configuration with gdeploy.

On Thu, Jan 23, 2020 at 4:32 AM Shareef Jalloq  wrote:

> Hi there,
>
> I'm wanting to build a 3 node Gluster hyperconverged setup but am
> struggling to find documentation and examples of the storage setup.
>
> There seems to be a dead link to an old blog post on the Gluster section
> of the documentation:
> https://www.ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>
> Is the flow to install the oVirt Node image on a boot drive and then add
> disks for Gluster? Or is Gluster setup first with ovirt installed on top?
>
> Thanks.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VBUO7APZDQJB2JF3ECBLR2JEUHDWO2IW/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UHL2H2Y5DYJSBNTJDAMKML2JPJWXFI4J/


[ovirt-users] Re: Disconnecting drive from VM

2020-01-22 Thread Jayme
I suppose it would matter how that disk is presented and used by the
operating system. If it’s the boot drive of course it would cause issues.
If it’s just a data disk maybe not.

On Wed, Jan 22, 2020 at 5:45 PM  wrote:

> Hi!
> For a Virtual Machine that is already shutdown, does anyone know if I can
> detach a disk from the VM, export the remaining parts of the VM, then
> reattach the disk without causing any problems with the software installed
> in the VM?
> Thanks,
> Anthony
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HC7C7LACFQ7WOVPKRSSJZ3ERYC2Q467R/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ECEVZBLEEJQRWGE3Q4F76ZH2KXZN7TYF/


[ovirt-users] ovirt_vm ansible module -- how to wait for ova export to finish

2020-01-22 Thread Jayme
I wrote a simple task that is using the ovirt_vm module
https://docs.ansible.com/ansible/latest/modules/ovirt_vm_module.html -- it
essentially loops over a list of vms and exports them to OVA.

The problem I have is the task is deemed changed once it successfully
submits the export task to oVirt. This means that if I gave it a list of
100 Vms I believe it would start an export task on all of them. I want to
prevent this and have it only export one VM at a time. In order to do this
I believe I will need to find a way for the task to wait and somehow verify
that the export was completed before submitting a task for the next VM
export.

Any ideas?

- name: Export the VM
  ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: "{{ item }}"
    state: exported
    cluster: default
    export_ova:
      host: Host0
      filename: "{{ item }}"
      directory: /backup/
  with_items: "{{ vms }}"
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PXYAQ7YEBQCUWCAQCFAFXB3545LNB23X/


[ovirt-users] Re: OVA export to NFS share slow

2020-01-22 Thread Jayme
Hello,

I have a very similar setup to yours and have just very recently started
testing OVA exports for backup purposes to NFS-attached storage.

I have a three node HCI on GlusterFS (SSD backed) with 10Gbit, and my ovirt
management network is 10Gbit as well.  My NFS storage server is 8 x 8Tb
7200 RPM drives in RAID10 running CentOS 8x with a 10Gbit link.

I haven't done specific measurements yet as I just set up the storage today,
but a test export of a 50Gb VM took just about ~10 minutes start to finish.

I will hopefully be doing some further testing over the next few weeks and
am interested to hear how you get along as well. If it's helpful I'd be
happy to run any testing you might be interested in on my equipment to see
how it compares.

- Jayme

On Wed, Jan 22, 2020 at 10:16 AM Jürgen Walch  wrote:

> Hello,
>
> we are using oVirt on a production system with a three node
> hyperconverged-cluster based on GlusterFS with a 10Gbit storage backbone
> network.
> Everything runs smooth except OVA exports.
>
> Each node has a NFS mount mounted on
>
> /data/ova
>
> with custom mount option "soft".
> The NFS server used is a plain vanilla CentOS7 host with /etc/exports
> containing a line
>
> /data/ova *(rw,all_squash,anonuid=36,anongid=36)
>
> When exporting VM's as OVA using the engine web gui, the export is
> terribly slow (~4MiB/s), it succeeds for small disks (up to 20GB),
> exporting larger disks fails with a timeout.
> The network link between oVirt-nodes and NFS server is 1Gbit.
>
> I have done a little testing and looked at the code in
> /usr/share/ovirt-engine/playbooks/roles/ovirt-ova-pack/files/pack_ova.py.
> It seems, the export is done by setting up a loop device /dev/loopX on the
> exporting node linked to a freshly generated sparse file
> /data/ova/{vmname}.tmp on the NFS share and then exporting the disk using
> qemu-img with target /dev/loopX.
> Using iotop on the node doing the export I can see write rates ranging
> from 2-5 Mib/s on the /dev/loopX device.
>
> When copying to the NFS share /data/ova using dd or qemu-img *directly*
> (that is using /data/ova/test.img instead of the loop device as target) I
> am getting write rates of ~100MiB/s which is the expected performance of
> the NFS servers underlying harddisk-system and the network connection. It
> seems that the loop device is the bottleneck.
>
> So far I have been playing with NFS mount options and the options passed
> to qemu-img in
> /usr/share/ovirt-engine/playbooks/roles/ovirt-ova-pack/files/pack_ova.py
> without any success.
>
> Any ideas or anyone with similar problems ? 
>
> --
>
>  juergen walch
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HZIAYHUKQ5XHGPM3PC4O5GGKHCB52XKU/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SVCUY3KAYUGS5AUN5IHJPJDY7Z2RZPTD/


[ovirt-users] possible to export a running VM to OVA?

2020-01-21 Thread Jayme
I'm looking at using a script similar to
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/export_vm_as_ova.py
to
export VMs as OVA for backup purposes.  I tested it out and it seems that
it does create a snapshot and allows me to export an OVA of a running VM.
I read in guides that the VM should be shutdown before exporting to OVA but
I'm not sure if that info is still relevant.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EM7GPIROW7NBR4BTXMFQPN4J6ZD5ZMVN/


[ovirt-users] Re: Ovirt backup

2020-01-20 Thread Jayme
Looking at the oVirt ansible roles, I wonder if it would be easy to implement
VM backups using the ovirt_snapshot module to create a VM snapshot and
download it.
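
Something like this for the snapshot step, if I'm reading the module docs
right (a sketch only; the VM name is a placeholder, and I haven't verified
how practical the download side is for large disks):

- name: Create a backup snapshot of the VM
  ovirt_snapshot:
    auth: "{{ ovirt_auth }}"
    vm_name: myvm
    description: nightly-backup
    state: present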

On Mon, Jan 20, 2020 at 5:04 AM Nathanaël Blanchet  wrote:

>
> Le 19/01/2020 à 18:38, Jayme a écrit :
>
> The biggest problem with these tools is that they are very inefficient.
> To work they snapshot the VM then clone the snapshot into a new VM, back it
> up then delete.  This takes a lot of space and time.
>
> vProtect and some other enterprise backup software snapshot the VM and
> stream the snapshot from the API without needing to clone or using a proxy
> VM.
>
> At the same time, this workflow is the one recommended by the ovirt team (
> https://www.ovirt.org/develop/release-management/features/storage/backup-restore-api-integration.html).
> If it is not efficient enough, the ovirt team should update the process and
> advise users of a better practice for vm backup in current/future ovirt
> 4.3/4.4.
>
> The new version of vProtect even bypasses the API (because it's slow) and
> now supports streaming over SSH directly from the host.  This is the ideal
> solution for oVirt VM backups imo, but I don't know if any free tool exists
> that can offer the same functionality.
>
> On Sun, Jan 19, 2020 at 1:03 PM Torsten Stolpmann <
> torsten.stolpm...@verit.de> wrote:
>
>> I am still using https://github.com/wefixit-AT/oVirtBackup but since
>> support for the v3 API will be removed with oVirt 4.4 it will stop
>> working with this release. For this reason I can no longer recommend it
>> but it served me well the past few years.
>>
>> There is also https://github.com/jb-alvarado/ovirt-vm-backup which has
>> similar functionality but I have yet no first-hand experience with this.
>>
>> Hope this helps.
>>
>> Torsten
>>
>> On 19.01.2020 10:05, Nazan CENGİZ wrote:
>> > Hi all,
>> >
>> >
>> > I want to back up Ovirt for free. Is there a script, project or tool
>> > that you can recommend for this?
>> >
>> >
>> > Is there a project that you can test, both backup and restore process
>> > can work properly?
>> >
>> >
>> > Best Regards,
>> >
>> > Nazan.
>> >
>> >
>> >
>> > <http://www.havelsan.com.tr>
>> > **Nazan CENGİZ
>> > AR-GE MÜHENDİSİ
>> > Mustafa Kemal Mahallesi 2120 Cad. No:39 06510 Çankaya Ankara TÜRKİYE
>> >   +90 312 219 57 87   +90 312 219 57 97
>> >
>> > YASAL UYARI: Bu elektronik posta işbu linki kullanarak ulaşabileceğiniz
>> > Koşul ve Şartlar dokümanına tabidir.
>> > <http://havelsan.com.tr/tr/news/e-posta-yasal-uyari>
>> > LEGAL NOTICE: This e-mail is subject to the Terms and Conditions
>> > document which can be accessed with this link.
>> > <http://havelsan.com.tr/tr/news/e-posta-yasal-uyari>
>> >   Lütfen gerekmedikçe bu sayfa ve eklerini yazdırmayınız / Please
>> > consider the environment before printing this email
>> >
>> >
>> >
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> > oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/G56O76VB5WO3MV2URL4OH3KNZMQRSKU4/
>> >
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2LGGH7UEC3RBNELT57YF7255FYORSMGZ/
>>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/H6JDPEBGWJY3KDRIKV2MJSJB64ZPZ3FS/
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> SIRE
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 

[ovirt-users] Re: Gluster: a lof of Number of ntries in heal pending

2020-01-20 Thread Jayme
I would try running a full heal first and give it some time to see if it
clears up, i.e. gluster volume heal <volname> full

If that doesn't work, you could try stat on every file to trigger healing
doing something like this: find /fuse-mountpoint -iname '*' -exec stat {} \;

On Mon, Jan 20, 2020 at 12:16 PM Stefan Wolf  wrote:

> Hello to all,
>
> I ve a problem with gluster
>
> [root@kvm10 ~]# gluster volume heal data info summary
> Brick kvm10:/gluster_bricks/data
> Status: Connected
> Total Number of entries: 868
> Number of entries in heal pending: 868
> Number of entries in split-brain: 0
> Number of entries possibly healing: 0
>
> Brick kvm320.durchhalten.intern:/gluster_bricks/data
> Status: Connected
> Total Number of entries: 1
> Number of entries in heal pending: 1
> Number of entries in split-brain: 0
> Number of entries possibly healing: 0
>
> Brick kvm360.durchhalten.intern:/gluster_bricks/data
> Status: Connected
> Total Number of entries: 867
> Number of entries in heal pending: 867
> Number of entries in split-brain: 0
> Number of entries possibly healing: 0
>
> Brick kvm380.durchhalten.intern:/gluster_bricks/data
> Status: Connected
> Total Number of entries: 868
> Number of entries in heal pending: 868
> Number of entries in split-brain: 0
> Number of entries possibly healing: 0
>
> [root@kvm10 ~]# gluster volume heal data info split-brain
> Brick kvm10:/gluster_bricks/data
> Status: Connected
> Number of entries in split-brain: 0
>
> Brick kvm320.durchhalten.intern:/gluster_bricks/data
> Status: Connected
> Number of entries in split-brain: 0
>
> Brick kvm360.durchhalten.intern:/gluster_bricks/data
> Status: Connected
> Number of entries in split-brain: 0
>
> Brick kvm380.durchhalten.intern:/gluster_bricks/data
> Status: Connected
> Number of entries in split-brain: 0
>
> As I understand it, there is no split-brain, but 868 files are in state heal
> pending.
> I've restarted every node.
>
> I ve also tried:
> [root@kvm10 ~]# gluster volume heal data full
> Launching heal operation to perform full self heal on volume data has been
> successful
> Use heal info commands to check status.
>
> but even after a week there is no real change (I started with 912
> entries in heal pending).
>
> Can somebody tell me what exactly the problem is and how I can solve it?
>
> thank you very much
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PN63LC3OBQOM7IQY763ZS5V6VZDUFPNP/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UXHQHUV7HA3AQI7VZFF5W22DB2STT5VJ/


[ovirt-users] Re: Ovirt backup

2020-01-19 Thread Jayme
The biggest problem with these tools is that they are very inefficient.  To
work they snapshot the VM then clone the snapshot into a new VM, back it up
then delete.  This takes a lot of space and time.

vProtect and some other enterprise backup software snapshot the VM and
stream the snapshot from the API without needing to clone or use a proxy
VM.  The new version of vProtect even bypasses the API (because it's slow)
and now supports streaming over SSH directly from the host.  This is the
ideal solution for oVirt VM backups imo, but I don't know if any free tool
exists that can offer the same functionality.

On Sun, Jan 19, 2020 at 1:03 PM Torsten Stolpmann <
torsten.stolpm...@verit.de> wrote:

> I am still using https://github.com/wefixit-AT/oVirtBackup but since
> support for the v3 API will be removed with oVirt 4.4 it will stop
> working with this release. For this reason I can no longer recommend it
> but it served me well the past few years.
>
> There is also https://github.com/jb-alvarado/ovirt-vm-backup which has
> similar functionality but I have yet no first-hand experience with this.
>
> Hope this helps.
>
> Torsten
>
> On 19.01.2020 10:05, Nazan CENGİZ wrote:
> > Hi all,
> >
> >
> > I want to back up Ovirt for free. Is there a script, project or tool
> > that you can recommend for this?
> >
> >
> > Is there a project that you can test, both backup and restore process
> > can work properly?
> >
> >
> > Best Regards,
> >
> > Nazan.
> >
> >
> >
> > 
> > **Nazan CENGİZ
> > AR-GE MÜHENDİSİ
> > Mustafa Kemal Mahallesi 2120 Cad. No:39 06510 Çankaya Ankara TÜRKİYE
> >   +90 312 219 57 87   +90 312 219 57 97
> >
> > YASAL UYARI: Bu elektronik posta işbu linki kullanarak ulaşabileceğiniz
> > Koşul ve Şartlar dokümanına tabidir.
> > 
> > LEGAL NOTICE: This e-mail is subject to the Terms and Conditions
> > document which can be accessed with this link.
> > 
> >   Lütfen gerekmedikçe bu sayfa ve eklerini yazdırmayınız / Please
> > consider the environment before printing this email
> >
> >
> >
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/G56O76VB5WO3MV2URL4OH3KNZMQRSKU4/
> >
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2LGGH7UEC3RBNELT57YF7255FYORSMGZ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H6JDPEBGWJY3KDRIKV2MJSJB64ZPZ3FS/


[ovirt-users] Re: Ovirt backup

2020-01-19 Thread Jayme
Good backup products for oVirt seem hard to come by. If you want to back up
10 or fewer VMs I’d recommend vProtect, as it’s free at that size. It works
great, but it’s costly for a license above 10 VMs.

On Sun, Jan 19, 2020 at 5:08 AM Nazan CENGİZ 
wrote:

> Hi all,
>
>
> I want to back up Ovirt for free. Is there a script, project or tool that
> you can recommend for this?
>
>
> Is there a project that you can test, both backup and restore process can
> work properly?
>
>
> Best Regards,
>
> Nazan.
>
>
> 
> Nazan CENGİZ
> AR-GE MÜHENDİSİ
> Mustafa Kemal Mahallesi 2120 Cad. No:39 06510 Çankaya Ankara TÜRKİYE
> 
> +90 312 219 57 87 +90 312 219 57 97
> YASAL UYARI: Bu elektronik posta işbu linki kullanarak ulaşabileceğiniz
> Koşul ve Şartlar dokümanına tabidir.
> 
> LEGAL NOTICE: This e-mail is subject to the Terms and Conditions
> document which can be accessed with this link.
> 
> Lütfen gerekmedikçe bu sayfa ve eklerini yazdırmayınız / Please consider
> the environment before printing this email
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/G56O76VB5WO3MV2URL4OH3KNZMQRSKU4/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I7T4CRXPR4EJ2CNFKF6E23YZB2SFZSGS/


[ovirt-users] Re: SPM contending loop

2020-01-14 Thread Jayme
I seem to have been able to break the loop by manually restarting vdsmd. As
soon as I restarted it on one host, that host was able to be selected as SPM.

On Tue, Jan 14, 2020 at 6:21 AM Jorick Astrego  wrote:

> Hi Jayme,
>
> The only thing I can find related to the vdsm errors you post is
> https://bugzilla.redhat.com/show_bug.cgi?id=1493184
>
> Nir states it's just a logging issue so that doesn't help to much.
>
> What version are you running?
>
> Anything in the gluster logs on the node?
>
>
>  Nir Soffer 2018-06-25 12:27:16 UTC
>
> These exceptions:
>
> OSError: [Errno 2] No such file or directory
> ...
> TaskMetaDataLoadError: Can't load Task Metadata: 
> ('/rhev/data-center/e6c5d8a2-5386-11e8-8885-004655214801/mastersd/master/tasks/60bf8af9-d4d3-4753-a40d-2a8d028d3d3c/60bf8af9-d4d3-4753-a40d-2a8d028d3d3c.recover.0',)
>
> Mean that a dumped task could not not loaded because there was no such file 
> in the
> task directory.
>
> This error does not effect the SPM start process, the code is trying to load 
> dumped
> tasks and ignore the result of the load.
>
> So this looks like log issue, moving severity to low since I don't see any 
> real
> issue.
>
> Regarding the exceptions, we have several issues:
>
> 1. Logging several exceptions for the same problem - we should log the same 
> issue
>exactly once. This happens because the code using the anti-pattern of 
> logging
>an exception and raising new one.
>
> 2. I'm not sure why we log a traceback for expected error like a missing 
> dumped
>task file. This should be logged without a traceback.
>
> 3. I'm not sure why missing dumped task is an error, since the SPM code ignore
>it. This should be probably a warning.
>
>
> Regards,
>
> Jorick Astrego
>
> Netbulae
> On 1/14/20 3:27 AM, Jayme wrote:
>
> My cluster appears to be experiencing an SPM problem. I recently placed
> each host in maintenance to move the ovirt management network to another
> interface.  All was successful and all VMs are currently running.  However,
> I'm not facing an SPM contending loop with data center going in and out of
> responsive status.
>
> I have a 3 server HCI setup and all volumes are active and healed, there
> are no unsynced entries or split brains.
>
> Does anyone know how I could diagnose the SPM issue?
>
> engine.log:
>
> 2020-01-13 22:24:54,777-04 INFO
>  [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand]
> (DefaultQuartzScheduler2) [213adf4f] START,
> GlusterTasksListVDSCommand(HostName = Orchard0,
> VdsIdVDSCommandParametersBase:{hostId='771c67eb-56e6-4736-8c67-668502d4ecf5'}),
> log id: 349f80a9
> 2020-01-13 22:24:55,231-04 INFO
>  [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand]
> (DefaultQuartzScheduler2) [213adf4f] FINISH, GlusterTasksListVDSCommand,
> return: [], log id: 349f80a9
> 2020-01-13 22:24:58,245-04 INFO
>  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
> (DefaultQuartzScheduler3) [4f66c75b] START,
> GlusterServersListVDSCommand(HostName = Orchard0,
> VdsIdVDSCommandParametersBase:{hostId='771c67eb-56e6-4736-8c67-668502d4ecf5'}),
> log id: 7b04f110
> 2020-01-13 22:24:58,887-04 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-72) [] Command
> 'org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand'
> return value '
> TaskStatusListReturn:{status='Status [code=654, message=Not SPM]'}
> '
> 2020-01-13 22:24:58,888-04 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-72) [] HostName = Orchard1
> 2020-01-13 22:24:58,888-04 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-72) [] Command
> 'HSMGetAllTasksStatusesVDSCommand(HostName = Orchard1,
> VdsIdVDSCommandParametersBase:{hostId='fb1e62d5-1dc1-4ccc-8b2b-cf48f7077d0d'})'
> execution failed: IRSGenericException: IRSErrorException:
> IRSNonOperationalException: Not SPM
> 2020-01-13 22:24:59,034-04 INFO
>  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
> (DefaultQuartzScheduler3) [4f66c75b] FINISH, GlusterServersListVDSCommand,
> return: [10.12.0.220/24:CONNECTED, orchard1.grove.silverorange.com:CONNECTED,
> orchard2.grove.silverorange.com:DISCONNECTED], log id: 7b04f110
> 2020-01-13 22:24:59,049-04 INFO
>  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
> (DefaultQuartzScheduler3) [4f66c75b] START,
> GlusterServersListVDSCommand(HostName = Orchard2,
> VdsIdVDSCommandParametersBase:{hostId
