[ovirt-users] Re: oVirt/Hyperconverged issue

2021-09-28 Thread Jayme
With 4 servers, only three would be used for hyperconverged storage; the 4th
would be added as a compute node, which would not participate in GlusterFS
storage.

To expand a hyperconverged cluster beyond 3 servers, you have to add hosts in
multiples of 3.
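
A minimal sketch of what adding one replica set looks like, assuming a
replica-3 distributed-replicate volume named "vms"; the hostnames and brick
paths are examples only:

  # add one new replica set of three bricks, then rebalance the volume
  gluster volume add-brick vms replica 3 \
    host4:/gluster_bricks/vms/vms \
    host5:/gluster_bricks/vms/vms \
    host6:/gluster_bricks/vms/vms
  gluster volume rebalance vms start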

On Tue, Sep 28, 2021 at 9:49 AM  wrote:

> Kindly also share: on the latest oVirt 4.4, is it possible to scale a
> hyperconverged deployment up to 4 nodes, or can it only use 3 nodes?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SG725S4G57UAVBTBV5QLBO7V2AOF2MCO/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QC6QKVLXXHJ7SZZ4ZGS7HCDNRZ5R7NWY/


[ovirt-users] Re: I'm in trouble....

2021-09-18 Thread Jayme
Once you add the NFS storage domain, you should be able to go into the
storage domain in the oVirt admin portal; there should be a VMs tab
somewhere. You should see a list of your VMs. Click on each, then hit Import
to import the config into the new engine.
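
If you prefer to script it, here is a rough sketch using the oVirt v4 REST
API as I understand it; the engine hostname, storage domain ID, VM ID, and
cluster name are all placeholders:

  # list VMs the imported data domain carries but the engine doesn't know yet
  curl -k -u 'admin@internal:PASSWORD' \
    'https://engine.example.com/ovirt-engine/api/storagedomains/SD_ID/vms;unregistered'
  # register one of them into a cluster
  curl -k -u 'admin@internal:PASSWORD' -X POST -H 'Content-Type: application/xml' \
    -d '<action><cluster><name>Default</name></cluster></action>' \
    'https://engine.example.com/ovirt-engine/api/storagedomains/SD_ID/vms/VM_ID/register'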

On Sat, Sep 18, 2021 at 6:28 PM Wesley Stewart  wrote:

> I appreciate the quick replies.  Yes very basic.  Single host/node with
> NFS and local storage.
>
> I will try to reinstall the host and import.  I haven't had to do this yet!
>
> On Sat, Sep 18, 2021, 5:26 PM Jayme  wrote:
>
>> It sounds like your setup is fairly basic. For quickness I’d personally
>> just reinstall the engine, import the storage domains from the NFS server,
>> and then import the VMs. You can restore the hosted engine if you have a
>> backup of it made by the proper hosted engine backup tool.
>>
>> On Sat, Sep 18, 2021 at 6:21 PM Wesley Stewart 
>> wrote:
>>
>>> I believe I should have the dump from the initial engine-setup.  Can
>>> that be used?
>>>
>>> Also, I'm only running about 8 vms and need to start over for 4.4.
>>> Would it be easier to just reinstall and import the disks?
>>>
>>>
>>>
>>> On Sat, Sep 18, 2021, 5:11 PM Jayme  wrote:
>>>
>>>> Do you have a backup of the hosted engine that you can restore? If your
>>>> VMs are on NFS mounts, you should be able to re-add the storage domain
>>>> and import the VMs.
>>>>
>>>> On Sat, Sep 18, 2021 at 4:26 PM Wesley Stewart 
>>>> wrote:
>>>>
>>>>> Luckily this is for a home lab and nothing critical is lost.  However
>>>>> I was trying to change the subnet my host was on.  And I found out that
>>>>> you can't do that.  So I reverted back, but my hosted engine wasn't
>>>>> happy about my NFS mounts.
>>>>>
>>>>> So I did an engine-cleanup and an engine-setup.  Went through the
>>>>> prompts and thought nothing of it.  I logged back into oVirt, and
>>>>> everything looks like it was a brand new install.  No VMs or anything.  I
>>>>> believe I have a fairly recent backup somewhere... trying to track that
>>>>> down, but it's not looking good.
>>>>>
>>>>> Currently running 4.3.10.  I have been planning out my 4.4 upgrade
>>>>> anyways.  I would prefer not to start from scratch... but if I do, it is
>>>>> not the end of the world; it will just take a lot more time.
>>>>>
>>>>> Looking for some guidance! Thank you.
>>>>> ___
>>>>> Users mailing list -- users@ovirt.org
>>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>>>> oVirt Code of Conduct:
>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives:
>>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BKJLN25CGI2U4LIO7HOEJQJVRE7CA3JN/
>>>>>
>>>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FA2WPYNCHRZ26KJNTXAAFAG7QLQTNLXB/


[ovirt-users] Re: I'm in trouble....

2021-09-18 Thread Jayme
It sounds like your setup is fairly basic. For quickness I’d personally
just reinstall the engine, import the storage domains from the NFS server,
and then import the VMs. You can restore the hosted engine if you have a
backup of it made by the proper hosted engine backup tool.

On Sat, Sep 18, 2021 at 6:21 PM Wesley Stewart  wrote:

> I believe I should have the dump from the initial engine-setup.  Can that
> be used?
>
> Also, I'm only running about 8 vms and need to start over for 4.4.  Would
> it be easier to just reinstall and import the disks?
>
>
>
> On Sat, Sep 18, 2021, 5:11 PM Jayme  wrote:
>
>> Do you have a backup of the hosted engine that you can restore? If your
>> VMs are on NFS mounts, you should be able to re-add the storage domain and
>> import the VMs.
>>
>> On Sat, Sep 18, 2021 at 4:26 PM Wesley Stewart 
>> wrote:
>>
>>> Luckily this is for a home lab and nothing critical is lost.  However I
>>> was trying to change the subnet my host was on.  And I found out that you
>>> can't do that.  So I reverted back, but my hosted engine wasn't happy
>>> about my NFS mounts.
>>>
>>> So I did an engine-cleanup and an engine-setup.  Went through the prompts
>>> and thought nothing of it.  I logged back into oVirt, and everything looks
>>> like it was a brand new install.  No VMs or anything.  I believe I have a
>>> fairly recent backup somewhere... trying to track that down, but it's not
>>> looking good.
>>>
>>> Currently running 4.3.10.  I have been planning out my 4.4 upgrade
>>> anyways.  I would prefer not to start from scratch... but if I do, it is
>>> not the end of the world; it will just take a lot more time.
>>>
>>> Looking for some guidance! Thank you.
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BKJLN25CGI2U4LIO7HOEJQJVRE7CA3JN/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HQ3TLGTOKEURVI4JVGFRS3W27VAGXE6F/


[ovirt-users] Re: I'm in trouble....

2021-09-18 Thread Jayme
Do you have a backup of the hosted engine that you can restore? If your VMs
are on NFS mounts, you should be able to re-add the storage domain and import
the VMs.
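
For reference, a minimal sketch of the backup/restore pair, assuming a 4.x
hosted engine; the file names are examples:

  # inside the engine VM: take a backup with the proper tool
  engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log
  # later, on a host: redeploy the hosted engine from that backup
  hosted-engine --deploy --restore-from-file=engine-backup.tar.gz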

On Sat, Sep 18, 2021 at 4:26 PM Wesley Stewart  wrote:

> Luckily this is for a home lab and nothing critical is lost.  However I
> was trying to change the subnet my host was on.  And I found out that you
> can't do that.  So I reverted back, but my hosted engine wasn't happy about
> my NFS mounts.
>
> So I did an engine-cleanup and an engine-setup.  Went through the prompts
> and thought nothing of it.  I logged back into oVirt, and everything looks
> like it was a brand new install.  No VMs or anything.  I believe I have a
> fairly recent backup somewhere... trying to track that down, but it's not
> looking good.
>
> Currently running 4.3.10.  I have been planning out my 4.4 upgrade
> anyways.  I would prefer not to start from scratch... but if I do, it is
> not the end of the world; it will just take a lot more time.
>
> Looking for some guidance! Thank you.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BKJLN25CGI2U4LIO7HOEJQJVRE7CA3JN/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GVSIEAOE7L63PLFQO5J6HGKLW5NM7OX6/


[ovirt-users] Re: REST API import VM from Export Domain

2021-09-15 Thread Jayme
Shouldn’t that be admin@internal or was that a typo?
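
The internal profile's user is admin@internal. A quick way to test the
credentials, assuming a 4.x engine (the hostname is a placeholder):

  # HTTP basic auth against the v4 API; -k skips CA verification for a self-signed cert
  curl -k -u 'admin@internal:PASSWORD' https://engine.example.com/ovirt-engine/api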

On Wed, Sep 15, 2021 at 4:40 AM  wrote:

> I've put it all in a REST client to check the syntax and how the request
> looks; now I got this response:
>
> access_denied: Cannot authenticate user 'admin@intern': No valid profile
> found in credentials..
>
> :(
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ON3HUSXWNAG633GYWLGZY4BYRDYTI3GP/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGNGCV2CUB6PN2CVGU22LYXGDBVBZZJX/


[ovirt-users] Re: GlusterFS Monitoring/Alerting

2021-09-07 Thread Jayme
I use the Nagios check_rhv plugin; it has support for monitoring GlusterFS
as well: https://github.com/rk-it-at/check_rhv
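
If you just need something lightweight in the meantime, a cron-driven sketch
like this can catch pending heals; it assumes volumes named "vms" and
"engine" and a working local "mail" command:

  #!/bin/bash
  # alert when any brick of a volume reports entries pending heal
  for vol in vms engine; do
    pending=$(gluster volume heal "$vol" info | awk '/Number of entries:/ {s += $NF} END {print s+0}')
    if [ "$pending" -gt 0 ]; then
      echo "volume $vol: $pending entries pending heal" | mail -s "gluster alert" admin@example.com
    fi
  done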

On Tue, Sep 7, 2021 at 8:39 AM Jiří Sléžka  wrote:

> Hi,
>
> On 9/7/21 1:05 PM, si...@justconnect.ie wrote:
> > Hi All,
> >
> > Does anyone have recommendations for GlusterFS monitoring/alerting
> software and or plugins.
>
> I am using Zabbix and this simple plugin
>
> https://github.com/Lelik13a/Zabbix-GluserFS
>
> there are probably more sophisticated solutions but this one serves me well
>
> Cheers,
>
> Jiri
>
> >
> > Kind regards
> >
> > Simon...
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/M6AKHWOD7GLBHJRSCWNZRMM7OOXMKFOY/
> >
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VX5OS3D2LVO5NCKL2UQBEQ523IZVM7BP/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QZNS4O3ZLYGP5LDIYEUKNRDI22CY2VHI/


[ovirt-users] Re: How many servers do I need to run oVirt?

2021-07-31 Thread Jayme
You could use a single server with VMs on local storage, or connected to
remote storage such as NFS.

There are drawbacks, of course: you could not keep VMs running if the host is
down, during upgrades, etc.

For any kind of high availability you’d want at least two servers with
remote storage, but then your storage could potentially be a single point
of failure if it’s not redundant.

Ideally, the best smaller-scale setup for oVirt is using three servers for
hyperconverged infrastructure. Storage is spread across the three hosts
using GlusterFS, which allows one of the three hosts to be taken down at
any given time while keeping VMs running.



On Sat, Jul 31, 2021 at 4:12 AM rp.neuli--- via Users 
wrote:

> Hello,
>
> sorry for a very basic question. But I searched the net for the basic
> answer and did not get it.
>
> How many servers do I need to run the oVirt software? I understand I can
> access the management GUI from anywhere on the subnet, but the naming and
> the way it is explained are confusing to me.
>
> I am planning to buy a bigger server (just one, to use as a compute host)
> with dual CPUs and about 64GB RAM. I am thinking of running my webservers
> as VMs on this proposed machine.
>
> I thought it was a type 1 hypervisor (then why do some places say that I
> need CentOS or RHEL on my server?)
>
> I also have a FreeNAS server; I was wondering if it was possible to keep my
> VMs stored on there and run/load/execute them on this so-called compute
> server.
>
> Will I have to build and keep my VMs (when powered down) on the oVirt
> machine only?
>
> Finally, what will I lose in comparison to free vSphere if I select oVirt?
>
> Thank you.
> Rajeev
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/H6LFFC2RA6EYBXH3TA4YE5IMC4FA22LA/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OEKJCTGKEEQXQZ7XXYACADK74JRXNPUM/


[ovirt-users] Re: Reconfigure Gluster from Replica 3 to Arbiter 1/Replica 2

2021-07-10 Thread Jayme
Just a thought, but depending on resources you might be able to use your 4th
server as NFS storage and live migrate VM disks to it and off of your Gluster
volumes. I’ve done this in the past when doing major maintenance on Gluster
volumes, to err on the side of caution.
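
A minimal sketch of exporting a scratch area from that 4th server over NFS so
it can be attached as a storage domain; the path and network are examples,
and 36:36 is the vdsm:kvm ownership oVirt expects:

  mkdir -p /exports/ovirt-staging
  chown 36:36 /exports/ovirt-staging
  echo '/exports/ovirt-staging 10.0.4.0/24(rw,anonuid=36,anongid=36)' >> /etc/exports
  exportfs -ra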

On Sat, Jul 10, 2021 at 7:22 AM David White via Users 
wrote:

> Hmm right as I said that, I just had a thought.
> I DO have a "backup" server in place (that I haven't even started using
> yet), that currently has some empty hard drive bays.
>
> It would take some extra work, but I could use that 4th backup server as a
> temporary staging ground to begin building the new Gluster configuration.
> Once I have that server + 2 of my production servers rebuilt properly, I
> could then simply remove and replace this "backup" server with my 3rd
> server in the cluster.
>
> So this effectively means that I have 2 servers that I can take down
> completely at a single time to rebuild gluster, instead of just 1. I think
> that simplifies things.
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
>
> On Saturday, July 10th, 2021 at 6:14 AM, David White <
> dmwhite...@protonmail.com> wrote:
>
> Thank you. And yes, I agree, this needs to occur in a maintenance window
> and be done very carefully. :)
>
> My only problem with this method is that I need to *replace* disks in the
> two servers.
> I don't have any empty hard drive bays, so will effectively need to put a
> host into maintenance mode, remove the drives, and put new drives in.
>
> I will NOT be touching the OS drives, however, as those are on their own
> separate RAID array.
>
> So, essentially, it will need to look something like this:
>
>- Put the cluster into global maintenance mode
>- Put 1 host into full maintenance mode / deactivate it
>- Stop gluster
>- Remove the storage
>- Add the new storage & reconfigure
>- Start gluster
>- Re-add the host to the cluster
>
> Adding the new storage & reconfiguring is the head scratcher for me, given
> that I don't have room for the old hard drives + new hard drives at the
> same time.
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Saturday, July 10th, 2021 at 5:55 AM, Strahil Nikolov <
> hunter86...@yahoo.com> wrote:
>
> Hi David,
>
>
> any storage operation can cause unexpected situations, so always plan your
> activities for low-traffic hours and test them on your test environment in
> advance.
>
> I think it's easier if you (command line):
>
> - verify no heals are pending. Not a single one.
> - set the host to maintenance over ovirt
> - remove the third node from gluster volumes (remove-brick replica 2)
> - umount the bricks on the third node
> - recreate a smaller LV with '-i maxpct=90 size=512' and mount it with the
> same options as the rest of the nodes. Usually I use
> 'noatime,inode64,context=system_u:object_r:glusterd_brick_t:s0'
> - add this new brick (add-brick replica 3 arbiter 1) to the volume
> - wait for the heals to finish
>
> Then repeat again for each volume.
>
>
> Adding the new disks should be done later.
>
>
> Best Regards,
> Strahil Nikolov
>
> On Sat, Jul 10, 2021 at 3:15, David White via Users
>  wrote:
> My current hyperconverged environment is replicating data across all 3
> servers.
> I'm running critically low on disk space, and need to add space.
>
> To that end, I've ordered 8x 800GB ssd drives, and plan to put 4 drives in
> 1 server, and 4 drives in the other.
>
> What's my best option for reconfiguring the hyperconverged cluster, to
> change gluster storage away from Replica 3 to a Replica 2 / Arbiter model?
> I'd really prefer not to have to reinstall things from scratch, but I'll
> do that if I have to.
>
> My most important requirement is that I cannot have any downtime for my
> VMs (so I can only reconfigure 1 host at a time).
>
> Sent with ProtonMail Secure Email.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YHEAXRUNA2RDJRYE74AOHND2QKLM3TAU/
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WWETWL7N7FT5UCYO4N3OXUY3YQUUVFNJ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: 

[ovirt-users] Re: glusterfs health-check failed, (brick) going down

2021-07-08 Thread Jayme
I have observed this behaviour recently and in the past on 4.3 and 4.4, and
in my case it’s almost always following an oVirt upgrade. After an upgrade
(especially upgrades involving GlusterFS) I’d have bricks randomly go down
like you’re describing for about a week or so, and I’d have to start them
manually. At some point it just corrects itself and is stable again. I really
have no idea why it occurs, or what eventually changes so that it stops
happening.
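
Until it settles, a rough watchdog along the lines of the force-start
workaround mentioned below could help; the volume name is an example and the
column parsing of "gluster volume status" output is approximate:

  # force-start the volume if any of its bricks shows Online = N
  if gluster volume status vms | awk '/^Brick/ {print $(NF-1)}' | grep -qx 'N'; then
    gluster volume start vms force
  fi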

On Wed, Jul 7, 2021 at 4:10 PM Jiří Sléžka  wrote:

> Hello,
>
> I have 3 node HCI cluster with oVirt 4.4.6 and CentOS8.
>
> From time to time (I believe) a random brick on a random host goes down
> because of the health check. It looks like:
>
> [root@ovirt-hci02 ~]# grep "posix_health_check"
> /var/log/glusterfs/bricks/*
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
> 07:13:37.408184] M [MSGID: 113075]
> [posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix:
> health-check failed, going down
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
> 07:13:37.408407] M [MSGID: 113075]
> [posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still
> alive! -> SIGTERM
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
> 16:11:14.518971] M [MSGID: 113075]
> [posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix:
> health-check failed, going down
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
> 16:11:14.519200] M [MSGID: 113075]
> [posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still
> alive! -> SIGTERM
>
> on other host
>
> [root@ovirt-hci01 ~]# grep "posix_health_check"
> /var/log/glusterfs/bricks/*
> /var/log/glusterfs/bricks/gluster_bricks-engine-engine.log:[2021-07-05
> 13:15:51.983327] M [MSGID: 113075]
> [posix-helpers.c:2214:posix_health_check_thread_proc] 0-engine-posix:
> health-check failed, going down
> /var/log/glusterfs/bricks/gluster_bricks-engine-engine.log:[2021-07-05
> 13:15:51.983728] M [MSGID: 113075]
> [posix-helpers.c:2232:posix_health_check_thread_proc] 0-engine-posix:
> still alive! -> SIGTERM
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-05
> 01:53:35.769129] M [MSGID: 113075]
> [posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix:
> health-check failed, going down
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-05
> 01:53:35.769819] M [MSGID: 113075]
> [posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still
> alive! -> SIGTERM
>
> I cannot link these errors to any storage/fs issue (in dmesg or
> /var/log/messages), brick devices looks healthy (smartd).
>
> I can force start brick with
>
> gluster volume start vms|engine force
>
> and after some healing all works fine for few days
>
> Did anybody observe this behavior?
>
> vms volume has this structure (two bricks per host, each is separate
> JBOD ssd disk), engine volume has one brick on each host...
>
> gluster volume info vms
>
> Volume Name: vms
> Type: Distributed-Replicate
> Volume ID: 52032ec6-99d4-4210-8fb8-ffbd7a1e0bf7
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 3 = 6
> Transport-type: tcp
> Bricks:
> Brick1: 10.0.4.11:/gluster_bricks/vms/vms
> Brick2: 10.0.4.13:/gluster_bricks/vms/vms
> Brick3: 10.0.4.12:/gluster_bricks/vms/vms
> Brick4: 10.0.4.11:/gluster_bricks/vms2/vms2
> Brick5: 10.0.4.13:/gluster_bricks/vms2/vms2
> Brick6: 10.0.4.12:/gluster_bricks/vms2/vms2
> Options Reconfigured:
> cluster.granular-entry-heal: enable
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> user.cifs: off
> network.ping-timeout: 30
> network.remote-dio: off
> performance.strict-o-direct: on
> performance.low-prio-threads: 32
> features.shard: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> transport.address-family: inet
> storage.fips-mode-rchecksum: on
> nfs.disable: on
> performance.client-io-threads: off
>
> Cheers,
>
> Jiri
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BPXG53NG34QKCABYJ35UYIWPNNWTKXW4/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BZRONK53OGWSOPUSGQ76GIXUM7J6HHMJ/


[ovirt-users] Re: Node 4.4.6 and Gluster deployment issues

2021-06-30 Thread Jayme
Check if there’s another lvm.conf file in the directory, like
lvm.conf.rpmsave, and swap them out. I recall having to do something similar
to solve a deployment issue much like yours.
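
Roughly, something like this (inspect both files first; the .rpmsave name is
just the usual one RPM leaves behind):

  ls -l /etc/lvm/lvm.conf*
  cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
  cp /etc/lvm/lvm.conf.rpmsave /etc/lvm/lvm.conf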

On Wed, Jun 30, 2021 at 6:39 PM  wrote:

> Been beating my head against this for a while, but I'm having issues
> deploying a new 4.4.6 node hyperconverged cluster.  It's a homelab/dev
> environment, so it's on pretty outdated hardware, SuperMicro X8DTT based
> systems.  No on board raid controller, so that should help at least.
> Storage drives are a software raid 0 of two 1T SSDs at /dev/md/ovstore and
> disk traffic is on a separate network from management traffic; ssh-copy-id
> was run for both interfaces.  It looks like it fails when trying to create
> the volume group on the VDO but I cannot figure out why it's excluded.
> /etc/lvm/lvm.conf is (should be) default from the install, raid is
> formatted with no partitions, and I've done a wipefs on the raid.  I've
> been able to complete the same install on these devices with Node 4.3.9,
> but not 4.4.6.  Logs are available at https://pastebin.com/yBzUpe3c
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3BVNTQI62BD2DEJ2XFFB6WGDY64IH2E4/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AQ4PVAJKX2BLC4SYHB2MM3ESTSPNEOTJ/


[ovirt-users] Re: Mount Export Domain only temporary

2021-06-24 Thread Jayme
A while ago I wrote an Ansible playbook to back up oVirt VMs via OVA export
to storage attached to one of the hosts (NFS in my case). You can check it
out here:
https://github.com/silverorange/ovirt_ansible_backup

I’ve been using this for a while and it has been working well for me on
oVirt 4.3 and 4.4.



On Thu, Jun 24, 2021 at 4:52 PM Strahil Nikolov via Users 
wrote:

> I think that you can use the API to do backup to the "local" NFS.
>
> Most probably you can still snapshot and copy the disks , but this will
> require a lot of effort to identify the read only disks and copy them.
>
> Best Regards,
> Strahil Nikolov
>
> On Thu, Jun 24, 2021 at 10:11, Jonathan Baecker
>  wrote:
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
>
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SBB3FRVK6W37AJLFFMRANZCQZOZFJ66B/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CGTTVQYZMA4ET3YH6II7KLGT64QOUJHK/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2AV2ZWUS7CO2PWUJ7ODE3PH247E7VXMR/


[ovirt-users] Re: The self-hosted engine upgraded to 4.4.6 was not Stream 8.

2021-05-31 Thread Jayme
I'm not sure if the hosted engine is on Stream yet. I'm also on 4.4.6, and
while my nodes are CentOS 8 Stream, my hosted engine is also still on 8.3.
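
If you do decide to convert it yourself, the generic CentOS Linux 8 to
Stream 8 migration is sketched below. I don't know whether this is a
supported path inside the hosted-engine VM, so take an engine backup first:

  dnf swap centos-linux-repos centos-stream-repos
  dnf distro-sync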

On Mon, May 31, 2021 at 3:45 AM mail--- via Users  wrote:

> I upgraded. The upgrade seems to have been successful.
> However, the distribution OS of the self-hosted engine did not change.
>
> # cat /etc/system-release
> CentOS Linux release 8.3.2011
>
> Do I need to manually change my self-hosted engine distribution to Stream
> 8?
>
> I saw this URL for the version upgrade procedure.
>
> https://www.ovirt.org/documentation/upgrade_guide/index.html#Updates_between_Minor_Releases
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6SWPMUJEVDQVUZCXXWWWJDYIGVNWHBPT/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6CLSRC7LN5B4YJ3ARQTMYB7IR2JFH2CL/


[ovirt-users] Re: After upgrade only 1/3 hosts is running Node 4.4.6

2021-05-28 Thread Jayme
Removing the ovirt-node-ng-image-update package and re-installing it
manually seems to have done the trick. Thanks for pointing me in the right
direction!
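
For anyone hitting the same thing, the steps that worked for me were roughly
(run inside tmux, as wodel suggested):

  dnf remove ovirt-node-ng-image-update
  dnf install https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
  reboot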

On Thu, May 27, 2021 at 9:57 PM Jayme  wrote:

> # rpm -qa | grep ovirt-node
> ovirt-node-ng-nodectl-4.4.0-1.el8.noarch
> python3-ovirt-node-ng-nodectl-4.4.0-1.el8.noarch
> ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>
> I removed ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch but yum update
> and check for updates in GUI still show no updates available.
>
> I can attempt re-installing the package tomorrow, but I'm not confident it
> will work since it was already installed.
>
>
> On Thu, May 27, 2021 at 9:32 PM wodel youchi 
> wrote:
>
>> Hi,
>>
>> On the "bad hosts", try to find if there are any 4.4.6 RPMs installed;
>> if yes, try to remove them, then try the update again.
>>
>> You can try to install the ovirt-node rpm manually, here is the link
>> https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
>>
>>> # dnf install ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
>>>
>>
>> PS: remember to use tmux if executing via ssh.
>>
>> Regards.
>>
>> Le jeu. 27 mai 2021 à 22:21, Jayme  a écrit :
>>
>>> The good host:
>>>
>>> bootloader:
>>>   default: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
>>>   entries:
>>> ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64):
>>>   index: 0
>>>   kernel:
>>> /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/vmlinuz-4.18.0-301.1.el8.x86_64
>>>   args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
>>> rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1 
>>> rd.lvm.lv=onn_orchard1/swap
>>> rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
>>> img.bootid=ovirt-node-ng-4.4.6.3-0.20210518.0+1
>>>   root: /dev/onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1
>>>   initrd:
>>> /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/initramfs-4.18.0-301.1.el8.x86_64.img
>>>   title: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
>>>   blsid: ovirt-node-ng-4.4.6.3-0.20210518.0+1-4.18.0-301.1.el8.x86_64
>>> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
>>>   index: 1
>>>   kernel:
>>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
>>>   args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
>>> rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1 
>>> rd.lvm.lv=onn_orchard1/swap
>>> rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
>>> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>>   root: /dev/onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>>   initrd:
>>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
>>>   title: ovirt-node-ng-4.4.5.1-0.20210323.0
>>> (4.18.0-240.15.1.el8_3.x86_64)
>>>   blsid:
>>> ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
>>> layers:
>>>   ovirt-node-ng-4.4.5.1-0.20210323.0:
>>> ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>>   ovirt-node-ng-4.4.6.3-0.20210518.0:
>>> ovirt-node-ng-4.4.6.3-0.20210518.0+1
>>> current_layer: ovirt-node-ng-4.4.6.3-0.20210518.0+1
>>>
>>>
>>> The other two show:
>>>
>>> bootloader:
>>>   default: ovirt-node-ng-4.4.5.1-0.20210323.0
>>> (4.18.0-240.15.1.el8_3.x86_64)
>>>   entries:
>>> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
>>>   index: 0
>>>   kernel:
>>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
>>>   args: crashkernel=auto resume=/dev/mapper/onn_orchard2-swap
>>> rd.lvm.lv=onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1 
>>> rd.lvm.lv=onn_orchard2/swap
>>> rhgb quiet boot=UUID=cd9dd412-2acd-4f3d-9b3e-44030153856f rootflags=discard
>>> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>>   root: /dev/onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>>   initrd:
>>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
>>>   title: ovirt-node-ng-4.4.5.1-0.20210323.0
>>> (4.18.0-240.15.1.el8_3.x86_64)
>>>   blsid:
>>> ovirt-node-ng-4.4.5.1-0.20210323.0+1

[ovirt-users] Re: After upgrade only 1/3 hosts is running Node 4.4.6

2021-05-27 Thread Jayme
# rpm -qa | grep ovirt-node
ovirt-node-ng-nodectl-4.4.0-1.el8.noarch
python3-ovirt-node-ng-nodectl-4.4.0-1.el8.noarch
ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch

I removed ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch but yum update
and check for updates in GUI still show no updates available.

I can attempt re-installing the package tomorrow, but I'm not confident it
will work since it was already installed.


On Thu, May 27, 2021 at 9:32 PM wodel youchi  wrote:

> Hi,
>
> On the "bad hosts", try to find if there are any 4.4.6 RPMs installed; if
> yes, try to remove them, then try the update again.
>
> You can try to install the ovirt-node rpm manually, here is the link
> https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
>
>> # dnf install ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
>>
>
> PS: remember to use tmux if executing via ssh.
>
> Regards.
>
> Le jeu. 27 mai 2021 à 22:21, Jayme  a écrit :
>
>> The good host:
>>
>> bootloader:
>>   default: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
>>   entries:
>> ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64):
>>   index: 0
>>   kernel:
>> /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/vmlinuz-4.18.0-301.1.el8.x86_64
>>   args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
>> rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1 
>> rd.lvm.lv=onn_orchard1/swap
>> rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
>> img.bootid=ovirt-node-ng-4.4.6.3-0.20210518.0+1
>>   root: /dev/onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1
>>   initrd:
>> /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/initramfs-4.18.0-301.1.el8.x86_64.img
>>   title: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
>>   blsid: ovirt-node-ng-4.4.6.3-0.20210518.0+1-4.18.0-301.1.el8.x86_64
>> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
>>   index: 1
>>   kernel:
>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
>>   args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
>> rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1 
>> rd.lvm.lv=onn_orchard1/swap
>> rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
>> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>   root: /dev/onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>   initrd:
>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
>>   title: ovirt-node-ng-4.4.5.1-0.20210323.0
>> (4.18.0-240.15.1.el8_3.x86_64)
>>   blsid:
>> ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
>> layers:
>>   ovirt-node-ng-4.4.5.1-0.20210323.0:
>> ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>   ovirt-node-ng-4.4.6.3-0.20210518.0:
>> ovirt-node-ng-4.4.6.3-0.20210518.0+1
>> current_layer: ovirt-node-ng-4.4.6.3-0.20210518.0+1
>>
>>
>> The other two show:
>>
>> bootloader:
>>   default: ovirt-node-ng-4.4.5.1-0.20210323.0
>> (4.18.0-240.15.1.el8_3.x86_64)
>>   entries:
>> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
>>   index: 0
>>   kernel:
>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
>>   args: crashkernel=auto resume=/dev/mapper/onn_orchard2-swap
>> rd.lvm.lv=onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1 
>> rd.lvm.lv=onn_orchard2/swap
>> rhgb quiet boot=UUID=cd9dd412-2acd-4f3d-9b3e-44030153856f rootflags=discard
>> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>   root: /dev/onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>   initrd:
>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
>>   title: ovirt-node-ng-4.4.5.1-0.20210323.0
>> (4.18.0-240.15.1.el8_3.x86_64)
>>   blsid:
>> ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
>> layers:
>>   ovirt-node-ng-4.4.5.1-0.20210323.0:
>> ovirt-node-ng-4.4.5.1-0.20210323.0+1
>> current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>
>> On Thu, May 27, 2021 at 6:18 PM Jayme  wrote:
>>
>>> It shows the 4.4.5 image on two hosts and 4.4.6 on one. Yum update shows
>>> nothing available, nor does check for upgrades in the admin GUI.
>>>
>>> I believe these two hosts failed on first install and succeeded on
>>> second attempt which may have something to do with it. How can I for

[ovirt-users] Re: After upgrade only 1/3 hosts is running Node 4.4.6

2021-05-27 Thread Jayme
The good host:

bootloader:
  default: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
  entries:
ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64):
  index: 0
  kernel:
/boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/vmlinuz-4.18.0-301.1.el8.x86_64
  args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1
rd.lvm.lv=onn_orchard1/swap rhgb quiet
boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
img.bootid=ovirt-node-ng-4.4.6.3-0.20210518.0+1
  root: /dev/onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1
  initrd:
/boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/initramfs-4.18.0-301.1.el8.x86_64.img
  title: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
  blsid: ovirt-node-ng-4.4.6.3-0.20210518.0+1-4.18.0-301.1.el8.x86_64
ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
  index: 1
  kernel:
/boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
  args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1
rd.lvm.lv=onn_orchard1/swap rhgb quiet
boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
  root: /dev/onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1
  initrd:
/boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
  title: ovirt-node-ng-4.4.5.1-0.20210323.0
(4.18.0-240.15.1.el8_3.x86_64)
  blsid:
ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
layers:
  ovirt-node-ng-4.4.5.1-0.20210323.0:
ovirt-node-ng-4.4.5.1-0.20210323.0+1
  ovirt-node-ng-4.4.6.3-0.20210518.0:
ovirt-node-ng-4.4.6.3-0.20210518.0+1
current_layer: ovirt-node-ng-4.4.6.3-0.20210518.0+1


The other two show:

bootloader:
  default: ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64)
  entries:
ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
  index: 0
  kernel:
/boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
  args: crashkernel=auto resume=/dev/mapper/onn_orchard2-swap
rd.lvm.lv=onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1
rd.lvm.lv=onn_orchard2/swap rhgb quiet
boot=UUID=cd9dd412-2acd-4f3d-9b3e-44030153856f rootflags=discard
img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
  root: /dev/onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1
  initrd:
/boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
  title: ovirt-node-ng-4.4.5.1-0.20210323.0
(4.18.0-240.15.1.el8_3.x86_64)
  blsid:
ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
layers:
  ovirt-node-ng-4.4.5.1-0.20210323.0:
ovirt-node-ng-4.4.5.1-0.20210323.0+1
current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1

On Thu, May 27, 2021 at 6:18 PM Jayme  wrote:

> It shows the 4.4.5 image on two hosts and 4.4.6 on one. Yum update shows
> nothing available, nor does check for upgrades in the admin GUI.
>
> I believe these two hosts failed on first install and succeeded on second
> attempt which may have something to do with it. How can I force them to
> update to 4.4.6 image? Would reinstall host do it?
>
> On Thu, May 27, 2021 at 6:03 PM wodel youchi 
> wrote:
>
>> Hi,
>>
>> What does "nodectl info" report on all hosts?
>> Did you execute "refresh capabilities" after the update?
>>
>> Regards.
>>
>> Le jeu. 27 mai 2021 à 20:37, Jayme  a écrit :
>>
>>> I updated my three server HCI cluster from 4.4.5 to 4.4.6. All hosts
>>> updated successfully and rebooted and are active. I notice that only one
>>> host out of the three is actually running oVirt node 4.4.6 and the other
>>> two are running 4.4.5. If I check for upgrade in admin it shows no upgrades
>>> available.
>>>
>>> Why are two hosts still running 4.4.5 after being successfully
>>> upgraded/rebooted and how can I get them on 4.4.6 if no upgrades are being
>>> found?
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:

[ovirt-users] Re: After upgrade only 1/3 hosts is running Node 4.4.6

2021-05-27 Thread Jayme
It shows the 4.4.5 image on two hosts and 4.4.6 on one. Yum update shows
nothing available, nor does check for upgrades in the admin GUI.

I believe these two hosts failed on first install and succeeded on second
attempt which may have something to do with it. How can I force them to
update to 4.4.6 image? Would reinstall host do it?

On Thu, May 27, 2021 at 6:03 PM wodel youchi  wrote:

> Hi,
>
> What does "nodectl info" report on all hosts?
> Did you execute "refresh capabilities" after the update?
>
> Regards.
>
>
> Le jeu. 27 mai 2021 à 20:37, Jayme  a écrit :
>
>> I updated my three server HCI cluster from 4.4.5 to 4.4.6. All hosts
>> updated successfully and rebooted and are active. I notice that only one
>> host out of the three is actually running oVirt node 4.4.6 and the other
>> two are running 4.4.5. If I check for upgrade in admin it shows no upgrades
>> available.
>>
>> Why are two hosts still running 4.4.5 after being successfully
>> upgraded/rebooted and how can I get them on 4.4.6 if no upgrades are being
>> found?
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UN57DRLYE3OIOP7O3SPKH7P5SHB4XJRJ/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GRRCKNB2DLKBHNHAUXXDO3DZ7YXVK2UJ/


[ovirt-users] After upgrade only 1/3 hosts is running Node 4.4.6

2021-05-27 Thread Jayme
I updated my three server HCI cluster from 4.4.5 to 4.4.6. All hosts
updated successfully and rebooted and are active. I notice that only one
host out of the three is actually running oVirt node 4.4.6 and the other
two are running 4.4.5. If I check for upgrade in admin it shows no upgrades
available.

Why are two hosts still running 4.4.5 after being successfully
upgraded/rebooted and how can I get them on 4.4.6 if no upgrades are being
found?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UN57DRLYE3OIOP7O3SPKH7P5SHB4XJRJ/


[ovirt-users] Re: Unable to migrate VMs

2021-05-27 Thread Jayme
The problem appears to be MTU-related; I may have a network configuration
problem. Setting the MTU back to 1500 seems to have solved it for now.
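
For later debugging, a quick way to confirm a jumbo-frame path end to end;
the interface name and peer address are examples:

  ip link show eth1 | grep -o 'mtu [0-9]*'
  # 8972 = 9000 minus 28 bytes of IP+ICMP headers; -M do forbids fragmentation
  ping -M do -s 8972 -c 3 10.11.0.10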

On Thu, May 27, 2021 at 2:26 PM Jayme  wrote:

> I've gotten a bit further. I have a separate 10GbE network for GlusterFS
> traffic which was also set as the migration network. I disabled migration
> on the GlusterFS network and enabled it on the default management network,
> and now migration seems to be working. I'm not sure why at this point; it
> used to work fine on the GlusterFS migration network in the past.
>
> On Thu, May 27, 2021 at 2:11 PM Jayme  wrote:
>
>> I have a three-node oVirt 4.4.5 cluster running oVirt Node hosts. Storage
>> is a mix of GlusterFS and NFS. Everything has been running smoothly, but the
>> other day I noticed many VMs had invalid snapshots. I run a script to
>> export OVA for VMs for backup purposes, exports seemed to have been fine
>> but snapshots failed to delete at the end. I was able to manually delete
>> the snapshots through oVirt admin GUI without any errors/warnings and the
>> VMs have been running fine and can restart them without problems.
>>
>> I thought this problem may be due to a snapshot bug which is supposedly
>> fixed in oVirt 4.4.6. I decided to start upgrading the cluster to 4.4.6 and am
>> now having a problem with VMs not being able to migrate.
>>
>> When I migrate any VM (it doesn't seem to matter which host to and from),
>> the process starts but stops at 0-1%. Eventually, after 15-30 minutes or
>> more, the tasks are all completed but the VM is not migrated.
>>
>> I am unable to migrate any VMs and as such I cannot place any host in
>> maintenance mode.
>>
>> I'm attaching some VDSM logs from the source and destination hosts; these
>> were captured after initiating a migration of a single VM.
>>
>> I'm seeing some errors in the logs regarding the migration stalling, but
>> am not able to determine why it's stalling.
>>
>> 2021-05-27 17:10:22,167+ INFO  (jsonrpc/4) [api.host] FINISH
>> getAllVmIoTunePolicies return={'status': {'code': 0, 'message': 'Done'},
>> 'io_tune_policies_dict': {'f8f4e4a1-b565-4663-8962-c8804dbb86fb':
>> {'policy': [], 'current_values': [{'name': 'vda', 'path':
>> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme1n1/bce04425-1d25-4489-bdab-2834a1a57db8/images/38b27cce-c744-4a12-85a3-3af07d386da2/93c1e793-f8cb-42c9-86a6-0e9ce4a6023a',
>> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
>> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
>> '2b87204f-f695-474a-9f08-47b85fcac366': {'policy': [], 'current_values':
>> [{'name': 'sda', 'path': 
>> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/f2e0c9f3-ab0d-441a-85a6-07a42e78b5a8/848f353e-6787-4e20-ab7b-0541ebd852c6',
>> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
>> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
>> '26332421-54a3-4afc-90e7-551a7e314c80': {'policy': [], 'current_values':
>> [{'name': 'vda', 'path': 
>> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/b7a785f9-307b-42af-9bbe-23cac884fe97/ed1d027e-a36a-4e6b-9207-119915044e06',
>> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
>> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
>> '60edbd80-dad7-4bf8-8fd1-e138413cf9f6': {'policy': [], 'current_values':
>> [{'name': 'sda', 'path': 
>> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/535fcb2e-ece9-4d50-86fe-bf6264d11ae1/6c01a036-8a14-46ba-a4b4-fe4f66a586a3',
>> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
>> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
>> 'sdb', 'path': 
>> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/1f467fb5-5ea7-42ba-bace-f175c86791b2/cbe8327f-9b7f-442f-a650-6888bb11a674',
>> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
>> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
>> 'sdd', 'path': 
>> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/c93956d5-c88d-41f9-8c38-9f5f62cc90dd/3920b46c-5fab-4b63-b47f-2fa5c6714c36',
>> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
>> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
>> 'beeefe06-78a0-4e14-a932-cc8d734d542d': {'policy': [], 'current_values':
>> [{'name': 'sda', 'path':
>> '/rhev/data-center/mnt/glusterSD

[ovirt-users] Re: Unable to migrate VMs

2021-05-27 Thread Jayme
I've gotten a bit further. I have a separate 10GbE network for GlusterFS
traffic which was also set as the migration network. I disabled migration
on the GlusterFS network and enabled it on the default management network,
and now migration seems to be working. I'm not sure why at this point; it
used to work fine on the GlusterFS migration network in the past.

On Thu, May 27, 2021 at 2:11 PM Jayme  wrote:

> I have a three-node oVirt 4.4.5 cluster running oVirt Node hosts. Storage
> is a mix of GlusterFS and NFS. Everything has been running smoothly, but the
> other day I noticed many VMs had invalid snapshots. I run a script to
> export OVA for VMs for backup purposes, exports seemed to have been fine
> but snapshots failed to delete at the end. I was able to manually delete
> the snapshots through oVirt admin GUI without any errors/warnings and the
> VMs have been running fine and can restart them without problems.
>
> I thought this problem may be due to a snapshot bug which is supposedly
> fixed in oVirt 4.4.6. I decided to start upgrading the cluster to 4.4.6 and am
> now having a problem with VMs not being able to migrate.
>
> When I migrate any VM (it doesn't seem to matter which host to and from),
> the process starts but stops at 0-1%. Eventually, after 15-30 minutes or
> more, the tasks are all completed but the VM is not migrated.
>
> I am unable to migrate any VMs and as such I cannot place any host in
> maintenance mode.
>
> I'm attaching some VDSM logs from the source and destination hosts; these
> were captured after initiating a migration of a single VM.
>
> I'm seeing some errors in the logs regarding the migration stalling, but
> am not able to determine why it's stalling.
>
> 2021-05-27 17:10:22,167+ INFO  (jsonrpc/4) [api.host] FINISH
> getAllVmIoTunePolicies return={'status': {'code': 0, 'message': 'Done'},
> 'io_tune_policies_dict': {'f8f4e4a1-b565-4663-8962-c8804dbb86fb':
> {'policy': [], 'current_values': [{'name': 'vda', 'path':
> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme1n1/bce04425-1d25-4489-bdab-2834a1a57db8/images/38b27cce-c744-4a12-85a3-3af07d386da2/93c1e793-f8cb-42c9-86a6-0e9ce4a6023a',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
> '2b87204f-f695-474a-9f08-47b85fcac366': {'policy': [], 'current_values':
> [{'name': 'sda', 'path': 
> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/f2e0c9f3-ab0d-441a-85a6-07a42e78b5a8/848f353e-6787-4e20-ab7b-0541ebd852c6',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
> '26332421-54a3-4afc-90e7-551a7e314c80': {'policy': [], 'current_values':
> [{'name': 'vda', 'path': 
> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/b7a785f9-307b-42af-9bbe-23cac884fe97/ed1d027e-a36a-4e6b-9207-119915044e06',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
> '60edbd80-dad7-4bf8-8fd1-e138413cf9f6': {'policy': [], 'current_values':
> [{'name': 'sda', 'path': 
> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/535fcb2e-ece9-4d50-86fe-bf6264d11ae1/6c01a036-8a14-46ba-a4b4-fe4f66a586a3',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
> 'sdb', 'path': 
> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/1f467fb5-5ea7-42ba-bace-f175c86791b2/cbe8327f-9b7f-442f-a650-6888bb11a674',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
> 'sdd', 'path': 
> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/c93956d5-c88d-41f9-8c38-9f5f62cc90dd/3920b46c-5fab-4b63-b47f-2fa5c6714c36',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
> 'beeefe06-78a0-4e14-a932-cc8d734d542d': {'policy': [], 'current_values':
> [{'name': 'sda', 'path':
> '/rhev/data-center/mnt/glusterSD/gluster0.grove.silverorange.com:_data__sdb/30fd0a2f-ab42-4a8a-8f0b-67242dc2d15d/images/310d8b3e-d578-418d-9802-dc0ebcea06d6/aa758c51-8478-4273-aeef-d4b374b8d6b4',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
> 'sdb', 'path':
> '/rhev/data-center/mnt/glusterSD/gluster0.grove.silverorange.com:_data__sdb/30fd0a2f-ab42-4a8a-8f0b-67242dc2d

[ovirt-users] Unable to migrate VMs

2021-05-27 Thread Jayme
I have a three-node oVirt 4.4.5 cluster running oVirt Node hosts. Storage
is a mix of GlusterFS and NFS. Everything has been running smoothly, but the
other day I noticed many VMs had invalid snapshots. I run a script to
export OVA for VMs for backup purposes, exports seemed to have been fine
but snapshots failed to delete at the end. I was able to manually delete
the snapshots through oVirt admin GUI without any errors/warnings and the
VMs have been running fine and can restart them without problems.

I thought this problem may be due to a snapshot bug which is supposedly fixed
in oVirt 4.4.6. I decided to start upgrading the cluster to 4.4.6 and am now
having a problem with VMs not being able to migrate.

When I migrate any VM (it doesn't seem to matter which host to and from),
the process starts but stops at 0-1%. Eventually, after 15-30 minutes or
more, the tasks are all completed but the VM is not migrated.

I am unable to migrate any VMs and as such I cannot place any host in
maintenance mode.

I'm attaching some VDSM logs from the source and destination hosts; these
were captured after initiating a migration of a single VM.

I'm seeing some errors in the logs regarding the migration stalling, but am
not able to determine why it's stalling.

2021-05-27 17:10:22,167+ INFO  (jsonrpc/4) [api.host] FINISH
getAllVmIoTunePolicies return={'status': {'code': 0, 'message': 'Done'},
'io_tune_policies_dict': {'f8f4e4a1-b565-4663-8962-c8804dbb86fb':
{'policy': [], 'current_values': [{'name': 'vda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme1n1/bce04425-1d25-4489-bdab-2834a1a57db8/images/38b27cce-c744-4a12-85a3-3af07d386da2/93c1e793-f8cb-42c9-86a6-0e9ce4a6023a',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'2b87204f-f695-474a-9f08-47b85fcac366': {'policy': [], 'current_values':
[{'name': 'sda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/f2e0c9f3-ab0d-441a-85a6-07a42e78b5a8/848f353e-6787-4e20-ab7b-0541ebd852c6',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'26332421-54a3-4afc-90e7-551a7e314c80': {'policy': [], 'current_values':
[{'name': 'vda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/b7a785f9-307b-42af-9bbe-23cac884fe97/ed1d027e-a36a-4e6b-9207-119915044e06',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'60edbd80-dad7-4bf8-8fd1-e138413cf9f6': {'policy': [], 'current_values':
[{'name': 'sda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/535fcb2e-ece9-4d50-86fe-bf6264d11ae1/6c01a036-8a14-46ba-a4b4-fe4f66a586a3',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
'sdb', 'path': 
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/1f467fb5-5ea7-42ba-bace-f175c86791b2/cbe8327f-9b7f-442f-a650-6888bb11a674',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
'sdd', 'path': 
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/c93956d5-c88d-41f9-8c38-9f5f62cc90dd/3920b46c-5fab-4b63-b47f-2fa5c6714c36',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'beeefe06-78a0-4e14-a932-cc8d734d542d': {'policy': [], 'current_values':
[{'name': 'sda', 'path':
'/rhev/data-center/mnt/glusterSD/gluster0.grove.silverorange.com:_data__sdb/30fd0a2f-ab42-4a8a-8f0b-67242dc2d15d/images/310d8b3e-d578-418d-9802-dc0ebcea06d6/aa758c51-8478-4273-aeef-d4b374b8d6b4',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
'sdb', 'path':
'/rhev/data-center/mnt/glusterSD/gluster0.grove.silverorange.com:_data__sdb/30fd0a2f-ab42-4a8a-8f0b-67242dc2d15d/images/4072fda1-ec82-45c9-b353-91fceb13bf08/891f5982-dead-48b4-8907-caa1e309fa82',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'7e5156de-649d-4904-9092-21a699242a37': {'policy': [], 'current_values':
[{'name': 'vda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/ca0c1208-a7aa-4ef6-a450-4a40bd4455f3/a2335199-ddd4-429b-b55d-f4d527081fd3',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]}}}
from=::1,35012 (api:54)
2021-05-27 17:10:31,118+ WARN  

[ovirt-users] Re: HCI - oVirt for CEPH

2021-04-26 Thread Jayme
What do you mean by Gluster being announced as EOL? Where did you find this
information?

On Mon, Apr 26, 2021 at 9:34 AM penguin pages 
wrote:

>
> I have been building out an HCI stack with KVM/RHEV + oVirt with the HCI
> deployment process.  This is very nice for small / remote site use cases,
> but with Gluster being announced as EOL in 18 months, what is the
> replacement plan?
>
> Are there working projects and plans to replace Gluster with CEPH?
> Are there deployment plans to get an HCI stack onto a supported file
> system?
>
> I liked gluster for the control plane for the oVirt engine and smaller
> utility VMs, as each system has a full copy; I can retrieve/extract a copy
> of the VM without having all bricks back... it was just "easy" to use.
> CEPH just means more complexity.. and though it scales better and has
> better features, repair means having a critical mass of nodes
> up before you can extract data (vs any disk can be pulled out of a gluster
> node, plugged into my laptop, and I can at least extract the data).
>
> I guess I am not trying to debate shifting to CEPH.. it does not matter..
> that ship sailed...  What I am asking is when / what are the plans for
> replacement of Gluster for HCI.  Because right now, for small sites using
> HCI, when Gluster is no longer supported.. and CEPH does not make it... the
> only option is to go to VMWare and vSAN or some other totally different stack.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MJHR2GFVIVHQBYDF2SU4KUVH5RXFMVOE/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K2UTOLJQQUPUORVDSXOTBOML52XVNWQY/


[ovirt-users] Re: What software used to take forever incremental backup from VM?

2021-04-18 Thread Jayme
vProtect would be worth looking into

On Sun, Apr 18, 2021 at 3:23 AM  wrote:

> Hi there,
>
> I want forever-incremental backup for 150+ virtual machines inside
> oVirt to save backup space, and the ability to restore in case a problem
> occurs. Any good advice?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IQOZNU3QW4NETGX2BAXFDGW3NIVZBO22/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GKT6FR3L4NZS4TS5TLE2SB2ZB2ILDLAI/


[ovirt-users] Re: How do I share a disk across multiple VMs?

2021-04-15 Thread Jayme
David, I'm curious what the use case is. Do you plan on using the disk with
three VMs at the same time?  This isn't really what shareable disks are
meant to do, afaik. If you want to share storage with multiple VMs I'd
probably just set up an NFS share on one of the VMs.

On Thu, Apr 15, 2021 at 7:37 PM David White via Users 
wrote:

> I found the proper documentation at
> https://www.ovirt.org/documentation/administration_guide/#Shareable_Disks.
>
> When I tried to edit the disk, I see that sharable is grayed out, and when
> I hover my mouse over it, I see "Sharable Storage is not supported on
> Gluster/Offload Domain".
>
> So to confirm, is there any circumstance where a Gluster volume can
> support sharable storage? Unfortunately, I don't have any other storage
> available, and I chose to use Gluster, so that I could have a HA
> environment.
>
>
> Sent with ProtonMail  Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Thursday, April 15, 2021 5:05 PM, David White via Users <
> users@ovirt.org> wrote:
>
> I need to mount a partition across 3 different VMs.
> How do I attach a disk to multiple VMs?
>
> This looks like fairly old documentation-not-documentation:
> https://www.ovirt.org/develop/release-management/features/storage/sharedrawdisk.html
>
>
> Sent with ProtonMail  Secure Email.
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WZY6OJWBH5KAB5H2XXYJOVI7BLR4Z67F/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3XAQI4WTRSPEECHZV3NEQLUMHRCEPUEZ/


[ovirt-users] Re: what host os version or ovirt node version I choose for my production setup

2021-04-13 Thread Jayme
If it's a smaller setup, one option might be to use RHEL. A developer
account with Red Hat allows up to 16 licensed servers for free.

On Mon, Apr 12, 2021 at 4:07 AM dhanaraj.ramesh--- via Users <
users@ovirt.org> wrote:

> We had done a successful POC of oVirt Node & HE with version 4.4.5 and are
> now planning for production. However, since the 4.4.5 oVirt Node & HE are
> based on CentOS 8.3, and Red Hat announced CentOS 8.3 EOL by the end of
> this year, is there any progress/plan to release oVirt Node based on
> CentOS Stream soon?
>
> If we choose to go with a CentOS 8.3-based oVirt Node, what could be the
> consequences in the near term, especially when it comes to the next
> production release upgrade?
>
> Kindly suggest which host version I should choose for a production setup?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7MHZAYYEVXC42GRSY6H5XTM6Q27JOHKJ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V4UBJ523M42RY6DHUUFY6WDBTTEJUTOZ/


[ovirt-users] Re: Is it possible to upgrade 3 node HCI from 4.3 to 4.4?

2021-03-25 Thread Jayme
After setting one host to maintenance to reinstall it with EL8, is anything
else required, such as reducing gluster?

On Thu, Mar 25, 2021 at 2:18 PM Strahil Nikolov 
wrote:

> Yes you can upgrade without a 4th host.
>
> Actually, first update to latest 4.3.10 or your backup won't be valid for
> update to 4.4
>
> You will need to set one of the hosts into maintenance and then reinstall
> that host with EL8 (CentOS or another).
> Once you install that node, you need to re-add it to the Gluster cluster
> and then deploy the new engine from backup (using a new gluster volume) on
> that EL8 host.
>
> Once it's successful, the rest of the hosts should be available and you
> will be able to remove one of the other nodes, reduce gluster, reinstall,
> get gluster running and add the host again in oVirt.
>
>
> Best Regards,
> Strahil Nikolov
>
>
> On Tue, Mar 23, 2021 at 19:01, Jayme
>  wrote:
> I have a fairly stock three node HCI setup running oVirt 4.3.9. The hosts
> are oVirt node. I'm using GlusterFS storage for the self hosted engine and
> for some VMs. I also have some other VMs running from an external NFS
> storage domain.
>
> Is it possible for me to upgrade this environment to 4.4 while keeping
> some of the VMs on GlusterFS storage domains running? Does the upgrade
> require an additional physical host? I'm reading through the upgrade guide
> and it's not very clear to me how the engine is reinstalled on CentOS8. Do
> you need a 4th host, or do you take one of the three existing hosts down,
> wipe it and install ovirt node 4.4?
>
> Would it be easier for me to just move all VMs to my NFS storage domain,
> wipe all hosts and deploy a brand new 4.4 HCI cluster then import the NFS
> storage domain? Will the 4.3 VMs on the NFS storage domain import/run
> properly into the new 4.4 deployment or are there any other considerations?
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/62ZSJ5IMZ43HCM356DJH6D3M76F3TGJF/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3MCVU3COMB3SDXJIHL3L5DOO434DWPEE/


[ovirt-users] Is it possible to upgrade 3 node HCI from 4.3 to 4.4?

2021-03-23 Thread Jayme
I have a fairly stock three node HCI setup running oVirt 4.3.9. The hosts
are oVirt node. I'm using GlusterFS storage for the self hosted engine and
for some VMs. I also have some other VMs running from an external NFS
storage domain.

Is it possible for me to upgrade this environment to 4.4 while keeping some
of the VMs on GlusterFS storage domains running? Does the upgrade require
an additional physical host? I'm reading through the upgrade guide and it's
not very clear to me how the engine is reinstalled on CentOS8. Do you need
a 4th host, or do you take one of the three existing hosts down, wipe it
and install ovirt node 4.4?

Would it be easier for me to just move all VMs to my NFS storage domain,
wipe all hosts and deploy a brand new 4.4 HCI cluster then import the NFS
storage domain? Will the 4.3 VMs on the NFS storage domain import/run
properly into the new 4.4 deployment or are there any other considerations?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/62ZSJ5IMZ43HCM356DJH6D3M76F3TGJF/


[ovirt-users] Re: What do some of the terms mean in the documentation?

2021-03-22 Thread Jayme
The hosts run the VMs. The engine basically just coordinates everything.

On Sun, Mar 21, 2021 at 8:50 PM jenia mtl  wrote:

> Hi Edward.
>
> "Therein" meaning inside the engine? The virtualization hosts run inside
> the engine not inside the hypervision/Ovirt-node? And just to make sure,
> the virtualization hosts are the VMs that ultimately run the apps?
>
> Thanks
> Evgeniy Ivlev
>
> On Sun, Mar 21, 2021 at 5:51 PM Edward Berger  wrote:
>
>> oVirt node is a dedicated hypervisor imgbased installer distro for oVirt
>> that handles upgrades all at once with a rollback feature, whereas an
>> Enterprise Linux Host would do the same thing but you would partition your
>> disks however you saw fit and manually install RPMs to get the same
>> functionality as a node-ng host.  No need to have both types of hypervisor
>> hosts.  The engine is sometimes a VM itself which provides the Web GUI and
>> databases needed to manage a cluster of virtualization hosts and VMs
>> therein.
>>
>>
>> On Sun, Mar 21, 2021 at 10:31 AM  wrote:
>>
>>> So in the documentation it says that the oVirt Engine is a user
>>> interface and a REST API endpoint. Ok, that's clear enough.
>>>
>>> As far as the oVirt hosts there are two types Enterprise Linux hosts and
>>> oVirt Nodes. Do the Enterprise Linux hosts run the VMs for my apps or the
>>> oVirt nodes?
>>>
>>> What are their respective roles? Do I need both?
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C24YO2HVDZYPJ46OEJZJ4YDFN6B5N3SN/
>>>
>> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HD5R5WCDJI72IW5EKCOIA37AEPF2I4RQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R7YMM7ZORKLHBRGA47QB7L7DEXK3C2DU/


[ovirt-users] Re: Hyperconverged engine high availability?

2021-03-20 Thread Jayme
If you deployed with the wizard, the hosted engine should already be HA and
can run on any host. If you look at the GUI you will see a crown beside each
host that is capable of running the hosted engine.

On Sat, Mar 20, 2021 at 5:14 PM David White via Users 
wrote:

> I just finished deploying oVirt 4.4.5 onto a 3-node hyperconverged cluster
> running on Red Hat 8.3 OS.
>
> Over the course of the setup, I noticed that I had to setup the storage
> for the engine separately from the gluster bricks.
>
> It looks like the engine was installed onto /rhev/data-center/ on the
> first host, whereas the gluster bricks for all 3 hosts are on
> /gluster_bricks/.
>
> I fear that I may already know the answer to this, but:
> Is it possible to make the engine highly available?
>
> Also, thinking hypothetically here, what would happen to my VMs that are
> physically on the first server, if the first server crashed? The engine is
> what handles the high availability, correct? So what if a VM was running on
> the first host? There would be nothing to automatically "move" it to one of
> the remaining healthy hosts.
>
> Or am I misunderstanding something here?
>
>
> Sent with ProtonMail  Secure Email.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/L6MMZSMSGIK7BTUSUECU65VZRMS4N33L/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q2KHQ4XA4H2FZ3C73CAYES3IOXI7YWTV/


[ovirt-users] Re: 2 Node oVirt Cluster

2021-02-24 Thread Jayme
Are you trying to set this up as an HCI deployment? If so, it might be
failing because the Raspberry Pi CPU is not supported by oVirt.

On Wed, Feb 24, 2021 at 3:08 AM  wrote:

> Hey there,
>
> I have been trying oVirt for some time now and would like to convert my
> main Proxmox cluster (2 nodes + RPi for quorum) to oVirt. But I obviously
> need three servers to do HA. Now I am asking myself: can't I just use the
> Pi as an arbiter node for the self-hosted engine on gluster? I tried
> running the playbook already, but it's always failing, and even if the
> gluster deployment went through, it would most likely fail afterwards,
> right? Before I waste too much energy and time on that, does anyone of you
> guys use a 2-node cluster? And no, I can't add another node (energy +
> space + money)!
> Also, I know that by doing that I'm totally on my own, etcetera.
>
> Thanks!
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EYIIHKLPK42OVKPU5CMYIWNTSLLOKCEJ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KAMRQP6TMOH7YD7LONPBRNLFRYRT42FY/


[ovirt-users] Re: Add single node to running gluster cluster

2021-02-22 Thread Jayme
I believe you'd need to add hosts in multiples of 3 to expand gluster
storage.
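
Once the three new bricks are prepared, the expansion itself is along these
lines (the volume name and brick paths below are placeholders):

    gluster volume add-brick VOLNAME \
        host4:/gluster_bricks/data/data \
        host5:/gluster_bricks/data/data \
        host6:/gluster_bricks/data/data
    gluster volume rebalance VOLNAME start

oVirt also exposes this under Storage > Volumes > Bricks, which should be the
safer route for a volume the engine manages.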

On Mon, Feb 22, 2021 at 3:56 AM  wrote:

> Ok, thanks for your answer.
>
> If I understand correctly, I can't expand my gluster storage?
>
> My goal was to add a node when I can, to grow my gluster and my compute
> capacity.
>
> Is there a way to do it? I've read some documentation about it, but I
> haven't succeeded in achieving it.
>
> Thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VXPKK6R3PYC347MMGPJRUM35VAURC562/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BAV5FEE6YAQ45SLEVEEFBH6YWQG2ASCD/


[ovirt-users] Re: Mutually exclusive VMs

2021-01-20 Thread Jayme
Take a look at configuring affinity rules
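
For example, a negative ("anti-affinity") VM rule with enforcing enabled will
hard-block two member VMs from running on the same host. A minimal sketch
using the ovirt.ovirt Ansible collection -- the cluster and VM names here are
made up:

    - name: Keep the cloned VMs on separate hosts
      ovirt.ovirt.ovirt_affinity_group:
        auth: "{{ ovirt_auth }}"
        name: clones-anti-affinity
        cluster: Default
        vm_enforcing: true    # hard rule: refuse to schedule rather than co-locate
        vm_rule: negative     # negative = anti-affinity between the member VMs
        vms:
          - clone1
          - clone2
          - clone3

The same thing can be set up in the GUI under Compute > Clusters > Affinity
Groups.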

On Wed, Jan 20, 2021 at 7:49 PM Shantur Rathore  wrote:

> Hi all,
>
> I am trying to figure out if there is a way to force oVirt to schedule VMs
> on different hosts.
> So if I am cloning 6 VMs from a template, I want oVirt to schedule them on
> all different hosts.
> No 2 VMs should be scheduled on the same host.
> I know that there is pin-to-host functionality, but I don't want to
> manually pin them to hosts.
>
> Thanks
> Shantur
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FZ7N7VRTI6BMHX53A3M3ST6U6P7DXFGA/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/72ZK2BI5H22JKT5GVAV7S64AVHVX5VRC/


[ovirt-users] Re: potential split-brain after upgrading Gluster version and rebooting one of three storage nodes

2021-01-11 Thread Jayme
Correct me if I'm wrong, but according to the docs there might be a more
elegant way of doing something similar with the gluster CLI, e.g.: gluster
volume heal  split-brain latest-mtime  -- although I have never
tried it myself.
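
Going by the Gluster docs, the sequence would be something like this (VOLNAME
and FILE are placeholders; untested on my end):

    # list files currently in split-brain
    gluster volume heal VOLNAME info split-brain

    # resolve by keeping the copy with the latest modification time
    gluster volume heal VOLNAME split-brain latest-mtime FILE

There is also a source-brick variant if you'd rather pick a specific brick as
the source.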

On Mon, Jan 11, 2021 at 1:50 PM Strahil Nikolov via Users 
wrote:

>
> > Is this a split brain situation and how can I solve this? I would be
> > very grateful for any help.
> I've seen it before. Just check on the nodes which brick contains the
> newest file (there is a timestamp inside it) and then rsync that file
> from the node with the newest version to the rest.
> If gluster keeps showing that the file is still needing heal - just
> "cat" it from the FUSE client (the mountpoint in /rhev/).
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2GLJIZQLFUZFSIVAVFFNG4CJZHNY7HFP/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DRZ76K554HN24ZVSVR7D2LGCUAICQCLR/


[ovirt-users] Re: Ovirt Cluster SSH key exchange

2021-01-07 Thread Jayme
It takes a very small amount of effort to do it one time using ssh-copy-id
but I suppose you could easily do it with Ansible too.
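
If you did want to automate it, a rough sketch with the ansible.posix
collection (the ovirt_hosts group and the key path are made up; this pushes
one shared key to every node):

    - hosts: ovirt_hosts
      tasks:
        - name: Authorize the shared public key on every node
          ansible.posix.authorized_key:
            user: root
            state: present
            key: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"

A full mesh (every host's own key on every other host) would need the
per-host public keys gathered first and a loop over them.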


On Thu, Jan 7, 2021 at 11:42 AM marcel d'heureuse 
wrote:

> hi,
>
> We have set up an oVirt system with 9 hosts.
> We will now add three more nodes, and we have to exchange the SSH public
> keys from each host to the others.
>
> We use oVirt 4.3 on CentOS 7.
>
> How did you exchange the SSH keys? Did you use Ansible to do this, or
> other tools?
>
> ssh-copy-id is not the best process here and wastes time.
>
> br
> marcel
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IX6XAFYF36SQDX5DBE4IGJ4BSFOCVJZQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZASFJJYHG3VWQJTK5K3PJKSH2W2DE6M6/


[ovirt-users] Re: CentOS 8 is dead

2020-12-10 Thread Jayme
It looks like a few forks are popping up already. There is a new project
called Rocky Linux, and now CloudLinux has announced an RHEL fork today,
which sounds promising:
https://blog.cloudlinux.com/announcing-open-sourced-community-driven-rhel-fork-by-cloudlinux

On Thu, Dec 10, 2020 at 5:42 AM Jorick Astrego  wrote:

> Hi,
>
> Personally I don't really see the problem with the CentOS Stream switch.
> Not trying to start a long discussion, but I think it will even be an
> improvement.
>
> Currently we use different combinations of EPEL, SCL, Elrepo etc. just
> to get some newer packages and a lot of people do the same and have no
> issues with this. oVirt even uses EPEL packages as dependencies.
>
> Most of these will become redundant because of Stream...
>
> Actually Red Hat has the same strategy for oVirt; it's an upstream for
> Red Hat Virtualization. So with the new CentOS strategy you will be one
> step ahead of the paid version on both the OS and the virtualization
> manager.
>
> Testing is always required and with tooling like Katello you can push
> the packages after testing to production easily. If you need enterprise
> grade stability and support that much, then you should buy it or hire
> people to do it in house.
>
> Just my 2c as I see a lot of people getting really worked up about it.
>
> Jorick Astrego
>
> On 12/9/20 2:25 PM, Michal Skrivanek wrote:
> >
> >> On 9 Dec 2020, at 01:21, thilb...@generalpacific.com wrote:
> >>
> >> I too would like to see if Ubuntu could become a bit more mainstream
> with oVirt now that CentOS is gone. I'm sure we won't hear anything until
> 2021; the oVirt staff need to figure out what to do now.
> > Right now we’re happy that CentOS 8.3 is finally here. That aligns 4.4.3
> and 4.4.4 again, makes the 4.5 cluster level usable, tons of bug fixes.
> > Afterwards…well, I think Stream is not a bad option, we already have it
> in some form. I suppose it’s going to be the most feasible option.
> > For anything else *someone* would need to do all the work. And I don’t
> mean it in a way that we - all the people with @redhat.com address - are
> forbidden to do that or something, it’s really about the sheer amount of
> work and dedication required, doubling the integration efforts. oVirt is
> (maybe surprisingly) complex and testing it on any new platform means a lot
> of extra manpower.
> >
> >
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7HHQH2XIHK2VPV4TTERO2NH7DGEYUWV4/
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JCX24N4UYHZ6HAIE32D2FV4ZT3BAIZYH/
>
>
>
>
> Met vriendelijke groet, With kind regards,
>
> Jorick Astrego
>
> *Netbulae Virtualization Experts *
> --
> Tel: 053 20 30 270 i...@netbulae.eu Staalsteden 4-3A KvK 08198180
> Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
> --
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2KYL7E76HKVP2F2JPCCUI6BRUARLAGCT/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W6G42E2O5LCSYJDNUI2WJ2CUSBXKRNND/


[ovirt-users] Re: new hyperconverged setup

2020-12-02 Thread Jayme
Ok yeah that is fairly similar to my setup, except I only have two drives
in each host.

In my case I created completely separate data volumes, one per drive. You
could do the same: three data volumes for storage at 7TB each, for example.
On one of the drives you'll need to split off a 100GB volume for the engine,
so you'd end up with something like 2 x 7TB data volumes, a 100GB engine
volume and a 6.9TB data volume, or something along those lines.

With multiple non-raided drives I'm not sure if this is the best approach
or if it makes more sense to software raid the drives first. Personally I
like the fact that my volumes are separated.

On Wed, Dec 2, 2020 at 9:13 AM cpo cpo  wrote:

> Once again thanks for the reply.  I have 3 identical drives in EACH host.
>
> Thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5E2ODTEBMOJIKAPH24FNLPJUKABDXHEC/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ENVH2IPHA2MZSCD4CF5DM3IM2KH7U7E4/


[ovirt-users] Re: new hyperconverged setup

2020-12-02 Thread Jayme
I have two SSDs in each host for storage. I ended up using the wizard, but
in the wizard I simply added two data volumes.

Ex:

storage1 = /dev/sda
storage2 = /dev/sdb

You can create as many storage volumes as you want. You don’t need to have
just one single large volume. You could have just one big volume but I
believe you’d need either hardware or software raid to accomplish that.

Am I correct that you have three drives (to be used for storage) in EACH
host or three drives total across all three hosts?

On Wed, Dec 2, 2020 at 8:47 AM cpo cpo  wrote:

> Thanks for the reply.  I have been told by a couple of different people
> not to use software raid with this gluster setup.  Did you end up creating
> your volumes on the command line or did you follow the wizard?
>
> Thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BRYP2KRFBFUYO7C3W3XTL4J2FYD33TDY/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I2DP4EM34FYE7ZTDDDM32IFQJYPUKL25/


[ovirt-users] Re: new hyperconverged setup

2020-12-02 Thread Jayme
Personally I also found this confusing when I set up my cluster a while
back. I ended up creating multiple data volumes, one for each drive. You
could probably software raid the drives first and present it to the
deployment wizard as one block device (see the sketch below). I'm not sure
if the deployment wizard will combine multiple physical drives into one
volume for you. Hopefully someone else can shed more light on this.
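
If you did go the software raid route, something like this could present the
three drives to the wizard as one block device (the device names are guesses
and the command wipes them):

    # stripe the three 7TB drives together -- capacity over redundancy here,
    # since gluster replica 3 already keeps a copy on every host
    mdadm --create /dev/md0 --level=0 --raid-devices=3 \
        /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

Then point the deployment wizard at /dev/md0 instead of the individual
drives.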

With your host specs it sounds like you should be fine to use
dedupe/compression



On Wed, Dec 2, 2020 at 8:10 AM  wrote:

> I am about to setup a new hyperconverged setup with the following
>
> 3 host setup
> 2 x nvme 240gb hd for OS on each host (raid)
> 3 x nvme 7 tb hd for vm storage on each host (no raid card)
> 500gb RAM per each host
>
> I am confused during the gluster setup portion using the web UI setup
> wizard.  When asked for the LV size, do I input the maximum size of the
> hard drive on a single host, or do I combine the total capacity of the
> matching hard drives on all 3 hosts?  For example, for hard drive
> /dev/nvme1p1 would I put in a capacity of 7TB, or should it be 21TB (the
> combined capacity of the matching single drive on the other hosts)?  Or is
> there a better method you recommend?  And since I am using NVMe hard
> drives, would you recommend using dedupe and compression or not?
>
> Thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CLIXBOMZ7EEZ54FZKG4A4YCWVFRS2N23/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QFGVZSAM7NXRY5VNCAWTK4AKSOFTVJNJ/


[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-08 Thread Jayme
IMO this is best handled at the hardware level with a UPS and battery/flash
backed controllers. Can you share more details about your oVirt setup? How
many servers are you working with, and are you using replica 3 or replica 3
arbiter?

On Thu, Oct 8, 2020 at 9:15 AM Jarosław Prokopowski 
wrote:

> Hi Guys,
>
> I had a situation 2 times where, due to an unexpected power outage,
> something went wrong and VMs on glusterfs were not recoverable.
> Gluster heal did not help and I could not start the VMs any more.
> Is there a way to make such a setup bulletproof?
> Does it matter which volume type I choose - raw or qcow2? Or thin
> provisioned versus preallocated?
> Any other advice?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MRM6H2YENBP3AHQ5JWSFXH6UT6J6SDQS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EG54VXKWJMXY5IQWCHJ4BIG7CL2WEXJC/


[ovirt-users] Re: oVirt Survey Autumn 2020

2020-10-06 Thread Jayme
https://docs.google.com/forms/u/1/d/e/1FAIpQLSdzzh_MSsSq-LSQLauJzuaHC0Va1baXm84A_9XBCIileLNSPQ/viewform?usp=send_form


On Tue, Oct 6, 2020 at 7:28 PM Strahil Nikolov via Users 
wrote:

> Hello All,
>
>
>
> can someone send me the full link (not the short one) as my proxy is
> blocking it :)
>
>
>
> Best Regards,
>
> Strahil Nikolov
>
>
>
>
>
>
>
>
>
>
>
>
>
> В вторник, 6 октомври 2020 г., 15:26:57 Гринуич+3, Sandro Bonazzola <
> sbona...@redhat.com> написа:
>
>
>
>
>
>
>
>
>
>
>
> Just a kind reminder about the survey (https://forms.gle/bPvEAdRyUcyCbgEc7)
> closing on October 18th
>
>
>
> Il giorno mer 23 set 2020 alle ore 11:11 Sandro Bonazzola <
> sbona...@redhat.com> ha scritto:
>
> > As we continue to develop oVirt 4.4, the Development and Integration
> teams at Red Hat would value insights on how you are deploying the oVirt
> environment.
>
> > Please help us to hit the mark by completing this short survey.
>
> > The survey will close on October 18th 2020. If you're managing multiple
> oVirt deployments with very different use cases or very different
> deployments you can consider answering this survey multiple times.
>
> >
>
> > Please note the answers to this survey will be publicly accessible.
>
> > This survey is under oVirt Privacy Policy available at
> https://www.ovirt.org/site/privacy-policy.html .
>
>
>
> and the privacy link was wrong, the right one:
> https://www.ovirt.org/privacy-policy.html (no content change, only url
> change)
>
>
>
>
>
> >
>
> >
>
> > The survey is available https://forms.gle/bPvEAdRyUcyCbgEc7
>
> >
>
> > --
>
> > Sandro Bonazzola
>
> > MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> > Red Hat EMEA
>
> >
>
> > sbona...@redhat.com
>
> >
>
> >
>
> >
>
> > Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
>
> >
>
> >
>
>
>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
>
>
> sbona...@redhat.com
>
>
>
>
>
>
>
> Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
>
>
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
>
>
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IJEW35XLR6WBM45DKYMZQ2UOZRWYXHKY/
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QYKXU7P2DNXPGZ2MOBBXVMJYA6DIND2S/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DAISGM2VUZ73SWAU5OALNXM35W7GCAVT/


[ovirt-users] Re: Replica Question

2020-09-28 Thread Jayme
It might be possible to do something similar as described in the
documentation here:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
-- but I'm not sure if oVirt HCI would support it. You might have to roll
out your own GlusterFS storage solution. Someone with more Gluster/HCI
knowledge might know better.
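
For reference, outside the HCI wizard an arbitrated replicated volume over
six bricks would be created roughly like this (hostnames and brick paths are
placeholders; every third brick becomes a metadata-only arbiter):

    gluster volume create datavol replica 3 arbiter 1 \
        server1:/bricks/datavol server2:/bricks/datavol server3:/bricks/datavol \
        server4:/bricks/datavol server5:/bricks/datavol server6:/bricks/datavol

Whether the engine would then manage such a volume cleanly is the part I
can't vouch for.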

On Mon, Sep 28, 2020 at 1:26 PM C Williams  wrote:

> Jayme,
>
> Thank for getting back with me !
>
> If I wanted to be wasteful with storage, could I start with an initial
> replica 2 + arbiter and then add 2 bricks to the volume? Could the arbiter
> solve split-brains for 4 bricks?
>
> Thank You For Your Help !
>
> On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
>
>> You can only do HCI in multiples of 3. You could do a 3-server HCI setup
>> and add the other two servers as compute nodes, or you could add a 6th
>> server and expand HCI across all 6.
>>
>> On Mon, Sep 28, 2020 at 12:28 PM C Williams 
>> wrote:
>>
>>> Hello,
>>>
>>> We recently received 5 servers. All have about 3 TB of storage.
>>>
>>> I want to deploy an oVirt HCI using as much of my storage and compute
>>> resources as possible.
>>>
>>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>>>
>>> I have deployed replica 3s and know about replica 2 + arbiter -- but an
>>> arbiter would not be applicable here -- since I have equal storage on all
>>> of the planned bricks.
>>>
>>> Thank You For Your Help !!
>>>
>>> C Williams
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AXTNUFZ3BASW2CWX7CWTMQS22324J437/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CECMD2SWBSBDAFP3TFMMYWTSV3UKU72E/


[ovirt-users] Re: Replica Question

2020-09-28 Thread Jayme
You can only do HCI in multiples of 3. You could do a 3-server HCI setup
and add the other two servers as compute nodes, or you could add a 6th
server and expand HCI across all 6.

On Mon, Sep 28, 2020 at 12:28 PM C Williams  wrote:

> Hello,
>
> We recently received 5 servers. All have about 3 TB of storage.
>
> I want to deploy an oVirt HCI using as much of my storage and compute
> resources as possible.
>
> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>
> I have deployed replica 3s and know about replica 2 + arbiter -- but an
> arbiter would not be applicable here -- since I have equal storage on all
> of the planned bricks.
>
> Thank You For Your Help !!
>
> C Williams
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AXTNUFZ3BASW2CWX7CWTMQS22324J437/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E5OIW7L4G6R5YMABQQ67JSS2BHB73QJT/


[ovirt-users] Re: Node 4.4.1 gluster bricks

2020-09-25 Thread Jayme
Assuming you don't care about the data on the drive, you may just need to
use wipefs on the device, e.g. wipefs -a /dev/sdb.
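
In full, something like this (destructive, so double-check the device name
first):

    # show any leftover filesystem/LVM signatures on the disk
    wipefs /dev/sdb

    # erase all of them
    wipefs -a /dev/sdb

If pvcreate still reports "excluded by a filter", it may be the LVM filter
that vdsm writes into /etc/lvm/lvm.conf; checking whether the new device is
allowed by that filter would be the next thing I'd look at.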

On Fri, Sep 25, 2020 at 12:53 PM Staniforth, Paul <
p.stanifo...@leedsbeckett.ac.uk> wrote:

> Hello,
>   how do you manage a gluster host when upgrading a node?
>
> I upgraded/replaced 2 nodes with the new install and can't recreate any
> gluster bricks.
> One node I wiped clean, and on the other I left the 3 gluster brick drives
> untouched.
>
> If I try to create bricks using the UI on the nodes, I get an internal
> server error. When I try to create a PV from the clean disk, I get device
> excluded by filter.
>
> e.g.
>
> pvcreate /dev/sdb
>
>   Device /dev/sdb excluded by a filter.
>
> pvcreate /dev/mapper/SSDSC2KB240G7R_BTYS83100E0S240AGN
>
>   Device /dev/mapper/SSDSC2KB240G7R_BTYS83100E0S240AGN excluded by a
> filter.
>
>
>
>
> Thanks,
>
>
> Paul S.
>
> To view the terms under which this email is distributed, please go to:-
> http://leedsbeckett.ac.uk/disclaimer/email/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/27IUR3H54G2FRS3OJHYR7ZDWDXYULUSO/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MUIKIWIP67PYW5PMTMDIY37TWQWMTRRK/


[ovirt-users] Re: Node upgrade to 4.4

2020-09-24 Thread Jayme
Interested to hear how upgrading 4.3 HCI to 4.4 goes. I've been considering
it in my environment but was thinking about moving all VMs off to NFS
storage then rebuilding oVirt on 4.4 and importing.

On Thu, Sep 24, 2020 at 1:45 PM  wrote:

> I am hoping for a miracle like that, too.
>
> In the meantime I am trying to make sure that all variants of exports and
> imports from *.ova to re-attachable NFS domains work properly, in case I
> have to start from scratch.
>
> HCI upgrades don't get the special love you'd expect after RHV's proud
> announcement that they are now ready to take on Nutanix and vSAN.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C2HVZDUABWKNFN4IJD2ILLQF5E2DUUBU/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZKWFAVDI5L2SGTAY7J4ISNRI25LRCMZ5/


[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-21 Thread Jayme
You could try setting the host to maintenance with the stop gluster option
checked, then re-activating the host, or try restarting the glusterd service
on the host.
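
Something like this on the affected host:

    systemctl restart glusterd
    systemctl status glusterd

If the bricks themselves are up and it's just the engine's view that is
stale, restarting glusterd is often enough for the engine to pick the host
back up on the next sync.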

On Mon, Sep 21, 2020 at 2:52 PM Jeremey Wise  wrote:

>
> oVirt engine shows one of the gluster servers having an issue.  I did a
> graceful shutdown of all three nodes over the weekend as I had to move
> around some power connections in prep for a UPS.
>
> Came back up.. but
>
> [image: image.png]
>
> And this is reflected in 2 bricks online (should be three for each volume)
> [image: image.png]
>
> Command line shows gluster should be happy.
>
> [root@thor engine]# gluster peer status
> Number of Peers: 2
>
> Hostname: odinst.penguinpages.local
> Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
> State: Peer in Cluster (Connected)
>
> Hostname: medusast.penguinpages.local
> Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
> State: Peer in Cluster (Connected)
> [root@thor engine]#
>
> # All bricks showing online
> [root@thor engine]# gluster volume status
> Status of volume: data
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/data/data  49152 0  Y
> 11001
> Brick odinst.penguinpages.local:/gluster_br
> icks/data/data  49152 0  Y
> 2970
> Brick medusast.penguinpages.local:/gluster_
> bricks/data/data49152 0  Y
> 2646
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
>
> Task Status of Volume data
>
> --
> There are no active volume tasks
>
> Status of volume: engine
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/engine/engine  49153 0  Y
> 11012
> Brick odinst.penguinpages.local:/gluster_br
> icks/engine/engine  49153 0  Y
> 2982
> Brick medusast.penguinpages.local:/gluster_
> bricks/engine/engine49153 0  Y
> 2657
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
>
> Task Status of Volume engine
>
> --
> There are no active volume tasks
>
> Status of volume: iso
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/iso/iso49156 49157  Y
> 151426
> Brick odinst.penguinpages.local:/gluster_br
> icks/iso/iso49156 49157  Y
> 69225
> Brick medusast.penguinpages.local:/gluster_
> bricks/iso/iso  49156 49157  Y
> 45018
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
>
> Task Status of Volume iso
>
> --
> There are no active volume tasks
>
> Status of volume: vmstore
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/vmstore/vmstore49154 0  Y
> 11023
> Brick odinst.penguinpages.local:/gluster_br
> icks/vmstore/vmstore49154 0  Y
> 2993
> Brick medusast.penguinpages.local:/gluster_
> bricks/vmstore/vmstore  49154 0  Y
> 2668
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
>
> Task Status of Volume 

[ovirt-users] Re: Ovirt Host Crashed

2020-09-02 Thread Jayme
I believe if you go into the storage domain in the GUI there should be a
tab for VMs which should list the VMs; then you can click the : menu and
choose import.

On Wed, Sep 2, 2020 at 9:24 AM Darin Schmidt  wrote:

> I am running this as an all-in-one system for a test bed at home. The
> system crashed, which led me to have to reinstall the OS (CentOS 8), and I
> imported the data stores but I cannot find any way to import the VMs that
> were in the DATA store. I haven't had a chance to back up/export the VMs. I
> haven't been able to find anything in the documentation on how to import
> these VMs. Any suggestions or links to what I'm looking for?
>
>
>
> I had to create a new DATA store, as importing a local data store wasn't an
> option; I assume it's because the host was down. Then I imported the old
> data store.
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JLXAUHGSTLL45A4TLLJT3JL4TEINJLZR/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XS5KJTWSX4JEVO46XGMDRWYVU3SGL5KR/


[ovirt-users] Re: How to Backup a VM

2020-08-31 Thread Jayme
Thanks for letting me know, I suspected that might be the case. I'll make a
note to fix that in the playbook.
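
It is likely something along these lines in the wait task -- purely a
sketch, the real task and variable names in the playbook may differ:

    - name: Wait for export
      ansible.builtin.find:
        paths: "{{ export_path }}"
        patterns: "*.ova"
      register: ova_files
      # lowercase both sides so VMName.ova also matches vmname.ova
      until: >-
        ova_files.files | map(attribute='path') | map('lower')
        | select('search', vm_name | lower) | list | length > 0
      retries: "{{ retries }}"
      delay: "{{ poll_interval }}"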

On Mon, Aug 31, 2020 at 3:57 AM Stefan Wolf  wrote:

> I think I found the problem.
>
>
>
> It is case-sensitive. For the export it is NOT case-sensitive, but for the
> step "wait for export" it is. I've changed it and now it seems to be working.
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RYFMBHZTJF76RT56HWUK5EV3ETB5QCSV/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JLNHT4MQ5RRQ5MVDATGSELUX27ECTB2E/


[ovirt-users] Re: How to Backup a VM

2020-08-30 Thread Jayme
Interesting, I've not hit that issue myself. I'd think it must somehow be
related to getting the event status. Is it happening to the same VMs every
time? Is there anything different about the VM names, or anything that would
set them apart from the others that work?

On Sun, Aug 30, 2020 at 11:56 AM Stefan Wolf  wrote:

> OK,
>
>
>
> I've run the backup three times.
>
> I still have two machines where it still fails on TASK [Wait for export].
>
> I think the problem is not the timeout; in oVirt engine the export has
> already finished: "
>
> Exporting VM VMName as an OVA to /home/backup/in_progress/VMName.ova on
> Host kvm360"
>
> But [Wait for export] still counts to 1, exits with an error and moves on
> to the next task.
>
>
>
> bye shb
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/W65G6ZUL6C6UJAJI627WVGITGIUUJ2XZ/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CEC5GLU5JF7S7JEMAPSWEJ675UEXR6PT/


[ovirt-users] Re: How to Backup a VM

2020-08-30 Thread Jayme
Also, if you look at the blog post linked on the GitHub page, it has info
about increasing the Ansible timeout on the oVirt engine machine. This will
be necessary when dealing with large VMs that take over 2 hours to export.

On Sun, Aug 30, 2020 at 8:52 AM Jayme  wrote:

>> You should be able to fix it by increasing the timeout variable in
>> main.yml. I think the default is pretty low, around 600 seconds (10
>> minutes). I have mine set for a few hours since I'm dealing with large
>> VMs. I'd also increase the poll interval as well so it's not checking for
>> completion every 10 seconds. I set my poll interval to 5 minutes.
>>
>> I have backed up many large VMs (over 1TB) with this playbook for the past
>> several months and never had a problem with it not completing.
>
> On Sun, Aug 30, 2020 at 3:39 AM Stefan Wolf  wrote:
>
>> Hello,
>>
>>
>>
>> >https://github.com/silverorange/ovirt_ansible_backup
>>
>> I am also still using 4.3.
>>
>> In my opinion this is by far the best and easiest solution for disaster
>> recovery. No need to install an appliance, and if there is a need to
>> recover, you can import the OVA in any hypervisor - no databases, no
>> dependencies.
>>
>>
>>
>> Sometimes I've issues with "TASK [Wait for export]"; sometimes it takes
>> too long to export the OVA. And I also had the problem that the export had
>> already finished, but it was not noticed by the script. In oVirt the
>> export was finished and the filename was renamed from *.tmp to *.ova.
>>
>>
>>
>> maybe you have an idea for me.
>>
>>
>>
>> thanks bye
>>
>> ___
>>
>> Users mailing list -- users@ovirt.org
>>
>> To unsubscribe send an email to users-le...@ovirt.org
>>
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>>
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q7TKVK5TL6HT7DQZCY354ICK5J3JRDH4/
>>
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UN2U3U3UD7ZRTJASWLQCAF34ELQSOJFN/


[ovirt-users] Re: How to Backup a VM

2020-08-30 Thread Jayme
You should be able to fix by increasing the timeout variable in main.yml. I
think the default is pretty low, around 600 seconds (10 minutes). I have
mine set for a few hours since I’m dealing with large vms. I’d also
increase poll interval as well so it’s not checking for completion every 10
seconds. I set my poll interval to 5 minutes.
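
A minimal sketch of the two variables (variable names and defaults from memory;
check the playbook's main.yml, as they may differ):

# main.yml -- values illustrative
timeout: 14400        # wait up to 4 hours for the export to finish
poll_interval: 300    # check for completion every 5 minutes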

I backup many large vms (over 1tb) with this playbook for the past several
months and never had a problem with it not completing.

On Sun, Aug 30, 2020 at 3:39 AM Stefan Wolf  wrote:

> Hello,
>
>
>
> >https://github.com/silverorange/ovirt_ansible_backup
>
> I am also still using 4.3.
>
> In my opinion this is by far the best and easiest solution for disaster
> recovery. No need to install an appliance, and if there is a need to
> recover, you can import the OVA on any hypervisor - no databases, no
> dependencies.
>
>
>
> Sometimes I have issues with "TASK [Wait for export]": sometimes it takes too
> long to export the OVA. I also had the problem that the export had already
> finished, but the script did not realize it. In oVirt the export was
> finished and the filename was renamed from *.tmp to *.ova.
>
>
>
> maybe you have an idea for me.
>
>
>
> thanks bye
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q7TKVK5TL6HT7DQZCY354ICK5J3JRDH4/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FSD7CVYYHG2LLOJGBJFYSMY2DXOFGBUZ/


[ovirt-users] Re: How to Backup a VM

2020-08-29 Thread Jayme
Probably the easiest way is to export the VM as OVA. The OVA format is a
single file which includes the entire VM image along with the config. You
can import it back into oVirt easily as well. You can do this from the GUI
on a running VM and export to OVA without bringing the VM down. The export
process will handle the creation and deletion of the snapshot.

You can export to OVA to a directory located on one of the hosts, this
directory could be a NFS mount on an external storage server if you want.

The problem with export to OVA is that you can't put it on a schedule and
it is mostly a manual process. You can however initiate it with Ansible.

A little while ago I actually wrote an ansible playbook to backup multiple
VMs on a schedule. It was written for oVirt 4.3; I have not had time to
test it with 4.4 yet.

https://github.com/silverorange/ovirt_ansible_backup
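
For example, the export step can be driven by the ovirt_vm Ansible module,
roughly like this (a minimal sketch: it assumes a prior ovirt_auth login task,
and the VM name, host and paths are placeholders):

- name: Export VM as OVA
  ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: myvm
    state: exported
    export_ova:
      host: host1.example.com
      directory: /backup
      filename: myvm.ova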

On Sat, Aug 29, 2020 at 10:14 AM Stefan Wolf  wrote:

> Hello to all
>
> I am trying to back up a normal VM, but it seems that I don't really understand
> the concept. At first I found the possibility to back up with the API:
> https://www.ovirt.org/documentation/administration_guide/#Setting_a_storage_domain_to_be_a_backup_domain_backup_domain
> Creating a snapshot of the VM, finding the ID of the snapshot and the
> configuration of the VM makes sense to me.
> But at this point, I would download the config and the snapshot and put them
> on my backup storage - not create a new VM, attach the disk and run a
> backup with a backup program; and for restoring, do the same in reverse.
>
> If I look at other projects, there seems to be a way to download the
> snapshot and config file, or am I wrong?
> Maybe someone can explain to me why I should install additional software
> on an additional machine. Or even better, someone can explain to me
> how to do it without additional backup software.
>
> And on the same topic of backup:
> the documentation also describes the possibility of setting up a backup storage.
> It is nearly the same: create a snapshot, or clone the machine, and export
> it to the backup storage.
> > Export the new virtual machine to a backup domain. See Exporting a
> Virtual Machine to a Data Domain in the Virtual Machine Management Guide.
> Sadly it only describes what to do, not how, and the link points to a 404
> page. Maybe someone can explain to me how to use backup storage.
>
> thank you very much
>
> shb
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/COR6VIV477XUFDKJAVEO2ODCESVENKLV/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CD4ZSSXPKULSF74TJNAS2USFE7YTIH2R/


[ovirt-users] Re: Incremental VM backups

2020-08-19 Thread Jayme
vProtect can do some form of incremental backup of oVirt VMs, at least on
4.3; I'm not sure where they're at for 4.4 support. Worth checking out; it's
free for 10 VMs.

On Wed, Aug 19, 2020 at 7:03 AM Kevin Doyle 
wrote:

> Hi
>
> I am looking at ways to backup VM's, ideally that support incremental
> backups. I have found a couple of python scripts that snapshot a VM and
> back it up but not incremental. The question is what do you use to backup
> the VM's ? (both linux and windows)
>
>
>
> Thanks
>
> Kevin
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KU2FI6KCAQGTLE46YEXFPJY7KQTTAQYN/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GK3NKES3ISMHSNN3QHW65IZ2I3ZIE6LD/


[ovirt-users] Re: HA Storage options

2020-08-17 Thread Jayme
I think you are perhaps overthinking a tad. Glusterfs is a fine solution
but it has had a rocky road. It would not be my first suggestion if you are
seeking high write performance, although that has been improving and
can be fine-tuned.
around cluster upgrades. Untouched gluster is solid and practically takes
care of itself. There are definitely more eggs in one basket when dealing
with hyperconverged in general.

AFAIK oVirt supports neither DRBD nor Ceph storage, although I think Ceph
may be planned for the future. I'm not aware of any plans to abandon
glusterfs.

The best piece of advice I can offer from experience running HCI over the
past few years is not to rush to update to the latest release right away.

On Mon, Aug 17, 2020 at 8:39 PM David White via Users 
wrote:

> Hi,
> I started an email thread a couple months ago, and felt like I got some
> great feedback and suggestions on how to best setup an oVirt cluster.
> Thanks for your responses thus far.
> My goal is to take a total of 3-4 servers that I can use for *both* the
> storage *and* the virtualization, and I want both to be highly available.
>
> You guys told me about oVirt Hyperconverged with Gluster, and that seemed
> like a great option. However, I'm concerned that this may not actually be
> the best approach. I've spoken with multiple people at Red Hat who I have a
> relationship with (outside of the context of the project I'm working on
> here), and all of them have indicated to me that Gluster is being
> deprecated, and that most of the engineering focus these days is on Ceph. I
> was also told by a Solutions Architect who has extensive experience with
> RHV that the hyperconverged clusters he used to build would always give him
> problems.
>
> Does oVirt support DRBD or Ceph storage? From what I can find, I think
> that the answer to both of those is, sadly, no.
>
> So now I'm thinking about switching gears, and going with iSCSI instead.
> But I'm still trying to think about the best way to replicate the storage,
> and possibly use multipathing so that it will be HA for the VMs that rely
> on it.
>
> Has anyone else experienced problems with the Gluster hyperconverged
> solution?
> Am I overthinking this whole thing, and am I being too paranoid?
> Is it possible to setup some sort of software-RAID with multiple iSCSI
> targets?
>
> As an aside, I now have a machine that I was planning to begin doing some
> testing and practicing with.
> Previous to my conversations with the folks at Red Hat, I was planning on
> doing some initial testing and config with this server before purchasing
> another 2-3 servers to build the hyperconverged cluster.
>
>
> Sent with ProtonMail  Secure Email.
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MHYBWFGV74OUGQJVBNPK3D4HM2FQPMYC/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SBTHXSUX7GOWK5NYPCCEUW2BFYZIYVBX/


[ovirt-users] Re: where is error log for OVA import

2020-07-28 Thread Jayme
Check engine.log in /var/log/ovirt-engine on the engine server/VM.
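
For example (standard engine log location; the grep pattern is just a starting
point):

tail -f /var/log/ovirt-engine/engine.log | grep -iE 'error|ova|import'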

On Tue, Jul 28, 2020 at 7:16 PM Philip Brown  wrote:

> I just tried to import an OVA file.
> The GUI status mentions that things seem to go along fairly happily..
> it mentions that it creates a disk for it
> but then eventually just says
>
> "failed to import VM x into datacenter Default"
> with zero explanation.
>
> Isn't there a log file or something I can check, somewhere, to find out
> what the problem is?
>
>
> --
> Philip Brown| Sr. Linux System Administrator | Medata, Inc.
> 5 Peters Canyon Rd Suite 250
> Irvine CA 92606
> Office 714.918.1310| Fax 714.918.1325
> pbr...@medata.com| www.medata.com
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5UDCY3OHJBL7VEYUWHAZQEQHFZ6SOIK6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PQAYC3XPOYZG4G26PIETNBPAUDRET4VO/


[ovirt-users] Re: oVirt install questions

2020-07-19 Thread Jayme
You would set up three servers first in hyperconverged mode using either
replica 3 or replica 3 arbiter 1, then add your fourth host afterward as a
compute-only host that can host VMs but does not participate in glusterfs
storage.

On Sun, Jul 19, 2020 at 3:12 PM David White via Users 
wrote:

> Thank you.
> So to make sure I understand what you're saying, it sounds like if I need
> 4 nodes (or more), I should NOT do a "hyperconverged" installation, but
> should instead prepare Gluster separately from the oVirt Manager
> installation. Do I understand this correctly?
>
> If that is the case, can I still use some of the servers for dual purposes
> (Gluster + oVirt Manager)? I'm most likely going to need more servers for
> the storage than I will need for the RAM & CPU, which is a little bit
> opposite of what you wrote (using 3 servers for Gluster and adding
> additional nodes for RAM & CPU).
>
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Sunday, July 19, 2020 9:57 AM, Strahil Nikolov via Users <
> users@ovirt.org> wrote:
>
> > Hi David,
> >
>
> > it's a little bit different.
> >
>
> > Ovirt supports 'replica 3' (3 directories host the same content) or
> 'replica 3 arbiter 1' (2 directories host same data, third directory
> contains metadata to prevent split brain situations) volumes.
> >
>
> > If you have 'replica 3' it is smart to keep the data on separate hosts,
> although you can keep it on the same host (but then you should use no
> replica and oVirt's Single node setup).
> >
>
> > When you extend, you need to add bricks (fancy name for a directory)
> in the x3 count.
> >
>
> > If you wish that you want to use 5 nodes, you can go with 'replica 3
> arbiter 1' volume, where ServerA & ServerB host data and ServerC host only
> metadata (arbiter). Then you can extend and for example ServerC can host
> again metadata while ServerD & ServerE host data for the second replica set.
> >
>
> > You can even use only 3 servers for Gluster , while much more systems as
> ovirt nodes (CPU & RAM) to host VMs.
> > In case of a 4 node setup - 3 hosts have the gluster data and the 4th -
> is not part of the gluster, just hosting VMs.
> >
>
> > Best Regards,
> > Strahil Nikolov
> >
>
> > На 19 юли 2020 г. 15:25:10 GMT+03:00, David White via Users
> users@ovirt.org написа:
> >
>
> > > Thanks again for explaining all of this to me.
> > > Much appreciated.
> > > Regarding the hyperconverged environment,
> > > reviewing
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
> ,
> > > it appears to state that you need, exactly, 3 physical servers.
> > > Is it possible to run a hyperconverged environment with more than 3
> > > physical servers?
> > > Because of the way that the gluster triple-redundancy works, I knew
> > > that I would need to size all 3 physical servers' SSD drives to store
> > > 100% of the data, but there's a possibility that 1 particular (future)
> > > customer is going to need about 10TB of disk space.
> > > For that reason, I'm thinking about what it would look like to have 4
> > > or even 5 physical servers in order to increase the total amount of
> > > disk space made available to oVirt as a whole. And then from there, I
> > > would of course setup a number of virtual disks that I would attach
> > > back to that customer's VM.
> > > So to recap, if I were to have a 5-node Gluster Hyperconverged
> > > environment, I'm hoping that the data would still only be required to
> > > replicate across 3 nodes. Does this make sense? Is this how data
> > > replication works? Almost like a RAID -- add more drives, and the RAID
> > > gets expanded?
> > > Sent with ProtonMail Secure Email.
> > > ‐‐‐ Original Message ‐‐‐
> > > On Tuesday, June 23, 2020 4:41 PM, Jayme jay...@gmail.com wrote:
> > >
>
> > > > Yes this is the point of hyperconverged. You only need three hosts to
> > > > setup a proper hci cluster. I would recommend ssds for gluster
> storage.
> > > > You could get away with non raid to save money since you can do
> replica
> > > > three with gluster meaning your data is fully replicated across all
> > > > three hosts.
> > >
>
> > > > On Tue, Jun 23, 2020 at 5:17 PM David White via Users
> > > > users@ovirt.org wrote:
> > >
>
> > > > > Thanks.
> > > > > I've only bee

[ovirt-users] Re: mixed hyperconverged?

2020-07-15 Thread Jayme
Your other hosts that aren’t participating in gluster storage would just
mount the gluster storage domains.

On Wed, Jul 15, 2020 at 6:44 PM Philip Brown  wrote:

> Hmm...
>
>
> Are you then saying, that YES, all host nodes need to be able to talk to
> the glusterfs filesystem?
>
>
> on a related note, I'd like to have as few nodes actually holding
> glusterfs data as possible, since I want that data on SSD.
> Rather than multiple "replication set" hosts, and one arbiter.. is it
> instead possible to have only 2 replication set hosts, and multiple
> (arbitrarily many) arbiter nodes?
>
>
> - Original Message -
> From: "Strahil Nikolov" 
> To: "users" , "Philip Brown" 
> Sent: Wednesday, July 15, 2020 1:59:40 PM
> Subject: Re: [ovirt-users] Re: mixed hyperconverged?
>
> You can use   a distributed replicated volume of type 'replica 3 arbiter
> 1'.
> For example, NodeA and NodeB contain replica set 1 with NodeC as
> their arbiter, and NodeD and NodeE form the second replica set 2, with NodeC
> as their arbiter also.
>
> In such a case you get only 2 copies of a single shard, but you are fully
> "supported" from a gluster perspective.
> Also, all hosts can have external storage like your NAS.
>
> Best Regards,
> Strahil Nikolov
>
> На 15 юли 2020 г. 21:11:34 GMT+03:00, Philip  Brown 
> написа:
> >arg. when I said "add 2 more nodes that arent part of the cluster", I
> >meant,
> >"part of the glusterfs cluster".
> >
> >or at minimum, maybe some kind of client-only setup, if they need
> >access?
> >
> >
> >- Original Message -
> >From: "Philip Brown" 
> >To: "users" 
> >Sent: Wednesday, July 15, 2020 10:37:48 AM
> >Subject: [ovirt-users] mixed hyperconverged?
> >
> >I'm thinking of doing an SSD based hyperconverged setup (for 4.3), but
> >am wondering about certain design issues.
> >
> >seems like the optimal number is 3 nodes for the glusterfs.
> >but.. I want 5 host nodes, not 3
> >and I want the main storage for VMs to be separate iSCSI NAS boxes.
> >Is it possible to have 3 nodes be the hyperconverged stuff.. but then
> >add in 2 "regular" nodes, that dont store anything and arent part of
> >the cluster?
> >
> >is it required to be part of the gluster cluster, to also be part of
> >the ovirt cluster, if thats where the hosted-engine lives?
> >or can I just have the hosted engine be switchable between the 3 nodes,
> >and the other 2 be VM-only hosts?
> >
> >Any recommendations here?
> >
>I don't want 5-way replication going on. Nor do I want to have to pay
>for large SSDs on all my host nodes.
>(I'm planning to run them with the oVirt 4.3 node image)
> >
> >
> >
> >--
> >Philip Brown| Sr. Linux System Administrator | Medata, Inc.
> >5 Peters Canyon Rd Suite 250
> >Irvine CA 92606
> >Office 714.918.1310| Fax 714.918.1325
> >pbr...@medata.com| www.medata.com
> >___
> >Users mailing list -- users@ovirt.org
> >To unsubscribe send an email to users-le...@ovirt.org
> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >oVirt Code of Conduct:
> >https://www.ovirt.org/community/about/community-guidelines/
> >List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZMXMGRGOMYE4UIQH32R6GCCHTABTGSX/
> >___
> >Users mailing list -- users@ovirt.org
> >To unsubscribe send an email to users-le...@ovirt.org
> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >oVirt Code of Conduct:
> >https://www.ovirt.org/community/about/community-guidelines/
> >List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RINQKWPRCQD5KYPFJYA75HFIUVJVTZXC/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/46IWO6CTOGJVZN2M6DMNB3AOX6B347S3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GGWUV4PJUL2HL6P6IW6PMVGNQZF5C35Z/


[ovirt-users] Re: how to get ovirt 4.3 documentation?

2020-07-13 Thread Jayme
Personally I find the rhev documentation much more complete:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/

On Mon, Jul 13, 2020 at 6:17 PM Philip Brown  wrote:

> I find it odd that the ovirt website lets you see older versions' RELEASE
> NOTES...
> but doesn't seem to give links to general documentation for older versions.
> For example, if you read
>
> https://www.ovirt.org/release/4.3.10/
> it says,
>
> "For complete installation, administration, and usage instructions, see
> the oVirt Documentation."
>
> but that links to the general docs page at
> https://www.ovirt.org/documentation/
>
> It does NOT link to any ovirt 4.3 docs, which is what I actually need
>
>
>
> --
> Philip Brown| Sr. Linux System Administrator | Medata, Inc.
> 5 Peters Canyon Rd Suite 250
> Irvine CA 92606
> Office 714.918.1310| Fax 714.918.1325
> pbr...@medata.com| www.medata.com
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VZWLU75AKAJNT7T7C644ESHVINYIH7OQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UDMCD45NU3MMD42YAOGCCSHRO3VXE27E/


[ovirt-users] Re: Ovirt 4.3.10 Glusterfs SSD slow performance over 10GE

2020-07-07 Thread Jayme
Emy,

I was wondering how much, if any, improvement I'd see with Gluster storage
moving to oVirt 4.4/CentOS 8.x (but I have not made the switch yet myself).
You should keep in mind that your PERC controllers aren't supported by
CentOS 8 out of the box; it dropped support for many older controllers.
You should still be able to get it to work using a driver update disk
during install. See: https://forums.centos.org/viewtopic.php?t=71862

Either way, this is good to know ahead of time, so as to limit surprises!

- Jayme

On Tue, Jul 7, 2020 at 10:22 AM shadow emy  wrote:

> I found the problem.
> The 3.x kernel in CentOS 7.8 is really too old and does not know how to
> handle new SSD disks or RAID controllers with the latest BIOS updates
> applied.
>
> Booting the latest Arch Linux ISO image (kernel 5.7.6) or CentOS 8.2
> (kernel 4.18) brought the performance up to the right values.
> I ran multiple dd tests on the above images using bs of 10, 100 and 1000M
> and had a constant write speed of 1.1GB/s. This is the expected value for 2
> SSDs in RAID 0.
>
> I had also enabled cache settings on the Dell PERC H710 RAID controller:
> write cache set to "Write Back", disk cache set to "Enabled", read cache to
> "Read Ahead". For those who think "Write Back" is a problem and the data
> might be corrupted, this should be ok now with the latest filesystems, xfs or
> ext4, which can recover in case of power loss. To make data safer, I also
> have a RAID cache battery and UPS redundancy.
>
> Now I know I must run oVirt 4.4 with CentOS 8.2 for good performance.
> I saw that upgrading from 4.3 to 4.4 is not an easy task, with multiple fails
> and not quite straightforward (I also have the hosted engine on the shared
> Gluster storage, which makes this upgrade even more difficult), but
> eventually I think I can get it running.
>
> Thanks,
> Emy
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZOFENYMPKXC6Z6MHOFFAUPPQCUFDNKHO/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OSDGNRHS25GUZG3RHIEHIZX66UYMGJIV/


[ovirt-users] Re: Ovirt 4.3.10 Glusterfs SSD slow performance over 10GE

2020-06-28 Thread Jayme
I’ve tried various methods to improve gluster performance on similar
hardware and never had much luck. Small file workloads were particularly
troublesome. I ended up switching high performance vms to nfs storage and
performance with nfs improved greatly in my use case.

On Sun, Jun 28, 2020 at 6:42 PM shadow emy  wrote:

> > Hello ,
>
> Hello, and thank you for the reply. Below are the answers to your questions.
> >
> > Let me ask some questions:
> > 1. What is the scheduler for your PV ?
>
>
> On the RAID controller device where the SSD disks are in RAID 0 (device
> sda) it is set to "deadline". But on the LVM logical volume dm-7, which
> backs the "data" volume, it is set to none (I think this is ok).
>
>
> [root@host1 ~]# ls -al /dev/mapper/gluster_vg_sda3-gluster_lv_data
> lrwxrwxrwx. 1 root root 7 Jun 28 14:14 /dev/mapper/gluster_vg_sda3-gluster_lv_data -> ../dm-7
> [root@host1 ~]# cat /sys/block/dm-7/queue/scheduler
> none
> [root@host1 ~]# cat /sys/block/sda/queue/scheduler
> noop [deadline] cfq
>
>
>
> > 2. Have you aligned your PV during the setup 'pvcreate
> --dataalignment alignment_value
> > device'
>
>
> I did not do any alignment other than the default. Below are the partitions
> on /dev/sda.
> Can I enable partition alignment now? If yes, how?
>
> sfdisk -d /dev/sda
> # partition table of /dev/sda
> unit: sectors
>
> /dev/sda1 : start= 2048, size=   487424, Id=83, bootable
> /dev/sda2 : start=   489472, size= 95731712, Id=8e
> /dev/sda3 : start= 96221184, size=3808675840, Id=83
> /dev/sda4 : start=0, size=0, Id= 0
>
>
>
> > 3. What is your tuned profile ? Do you use rhgs-random-io from
> > the
> ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/red...
> > ?
>
> My tuned active profile is virtual-host
>
> Current active profile: virtual-host
>
> No, I don't use any of the rhgs-random-io profiles.
>
> > 4. What is the output of "xfs_info /path/to/your/gluster/brick" ?
>
> xfs_info /gluster_bricks/data
> meta-data=/dev/mapper/gluster_vg_sda3-gluster_lv_data isize=512 agcount=32, agsize=6553600 blks
>          =                      sectsz=512   attr=2, projid32bit=1
>          =                      crc=1        finobt=0 spinodes=0
> data     =                      bsize=4096   blocks=209715200, imaxpct=25
>          =                      sunit=64     swidth=64 blks
> naming   =version 2             bsize=8192   ascii-ci=0 ftype=1
> log      =internal              bsize=4096   blocks=102400, version=2
>          =                      sectsz=512   sunit=64 blks, lazy-count=1
> realtime =none                  extsz=4096   blocks=0, rtextents=0
>
> > 5. Are you using Jumbo Frames ? Does your infra support them?
> > Usually MTU of 9k is standard, but some switches and NICs support up to
> 16k.
> >
>
> Unfortunately I cannot set an MTU of 9000 and jumbo frames on specific
> ports of these Cisco SG350X switches. The switches don't support enabling
> jumbo frames on a single port, only on all ports.
> I have other devices connected to the switches on the remaining 48 ports
> that run at 1Gb/s.
>
> > All the options for "optimize for virt" are located
> > at /var/lib/glusterd/groups/virt on each gluster node.
>
> I have already looked at that file previously, but not all the volume
> settings that are set by "Optimize for Virt Store" are stored there.
> For example, "Optimize for Virt Store" sets network.remote-dio to
> disable while in glusterd/groups/virt it is set to enable. Or
> cluster.granular-entry-heal: enable is not present there, but it is set by
> "Optimize for Virt Store".
>
> >
> > Best Regards,
> > Strahil Nikolov
> >
> >
> >
> >
> > On Sunday, June 28, 2020 at 22:13:09 GMT+3, jury cat  gmail.com
> > wrote:
> >
> >
> >
> >
> >
> > Hello all,
> >
> > I am using oVirt 4.3.10 on CentOS 7.8 with glusterfs 6.9.
> > My Gluster setup is 3 hosts in replica 3 (2 hosts + 1 arbiter).
> > All 3 hosts are Dell R720s with a PERC H710 mini RAID controller (maximum
> > throughput 6Gb/s) and 2×1TB Samsung SSDs in RAID 0. The volume is
> > partitioned using LVM thin provisioning and formatted XFS.
> > The hosts have separate 10GbE network cards for storage traffic.
> > The Gluster network is connected to these 10GbE network cards and is
> > mounted using FUSE glusterfs (NFS is disabled). The migration network is
> > also activated on the same storage network.
> >
> >
> > The problem is that the 10GbE network is not used to its full potential by
> > Gluster.
> > If I do live migration of VMs I can see speeds of 7Gb/s ~ 9Gb/s.
> > The same network tests using iperf3 reported 9.9Gb/s, which excludes the
> > network setup as a bottleneck (I will not paste all the iperf3 tests here
> > for now).
> > I did not enable all the volume options from "Optimize for Virt Store",
> > because of the bug that can't set volume

[ovirt-users] Re: oVirt install questions

2020-06-23 Thread Jayme
Yes, this is the point of hyperconverged. You only need three hosts to set up
a proper HCI cluster. I would recommend SSDs for gluster storage. You could
get away with non-RAID to save money, since you can do replica 3 with
gluster, meaning your data is fully replicated across all three hosts.
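
For reference, the volumes the HCI wizard creates are ordinary replica 3
gluster volumes, roughly equivalent to this sketch (brick paths are
illustrative; the wizard also applies a set of virt-oriented volume options):

gluster volume create data replica 3 \
    host1:/gluster_bricks/data/data \
    host2:/gluster_bricks/data/data \
    host3:/gluster_bricks/data/data
gluster volume start data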


On Tue, Jun 23, 2020 at 5:17 PM David White via Users 
wrote:

> Thanks.
> I've only been considering SSD drives for storage, as that is what I
> currently have in the cloud.
>
> I think I've seen some things in the documents about oVirt and gluster
> hyperconverged.
> Is it possible to run oVirt and Gluster together on the same hardware? So
> 3 physical hosts would run CentOS or something, and I would install oVirt
> Node + Gluster onto the same base host OS? If so, then I could probably
> make that fit into my budget.
>
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Monday, June 22, 2020 1:02 PM, Strahil Nikolov via Users <
> users@ovirt.org> wrote:
>
> > Hey David,
> >
>
> > keep in mind that you need some big NICs.
> > I started my oVirt lab with 1 Gbit NIC and later added 4 dual-port 1
> Gbit NICs and I had to create multiple gluster volumes and multiple storage
> domains.
> > Yet, windows VMs cannot use software raid for boot devices, thus it's a
> pain in the @$$.
> > I think that optimal is to have several 10Gbit NICs (at least 1 for
> gluster and 1 for oVirt live migration).
> > Also, NVMEs can be used as lvm cache for spinning disks.
> >
>
> > Best Regards,
> > Strahil Nikolov
> >
>
> > На 22 юни 2020 г. 18:50:01 GMT+03:00, David White
> dmwhite...@protonmail.com написа:
> >
>
> > > > For migration between hosts you need a shared storage. SAN, Gluster,
> > > > CEPH, NFS, iSCSI are among the ones already supported (CEPH is a
> little
> > > > bit experimental).
> > >
>
> > > Sounds like I'll be using NFS or Gluster after all.
> > > Thank you.
> > >
>
> > > > The engine is just a management layer. KVM/qemu has that option a
> > > > long time ago, yet it's some manual work to do it.
> > > > Yeah, this environment that I'm building is expected to grow over
> time
> > > > (although that growth could go slowly), so I'm trying to architect
> > > > things properly now to make future growth easier to deal with. I'm
> also
> > > > trying to balance availability concerns with budget constraints
> > > > starting out.
> > >
>
> > > Given that NFS would also be a single point of failure, I'll probably
> > > go with Gluster, as long as I can fit the storage requirements into the
> > > overall budget.
> > > Sent with ProtonMail Secure Email.
> > > ‐‐‐ Original Message ‐‐‐
> > > On Monday, June 22, 2020 6:31 AM, Strahil Nikolov via Users
> > > users@ovirt.org wrote:
> > >
>
> > > > На 22 юни 2020 г. 11:06:16 GMT+03:00, David White via
> > > > usersus...@ovirt.org написа:
> > >
>
> > > > > Thank you and Strahil for your responses.
> > > > > They were both very helpful.
> > >
>
> > > > > > I think a hosted engine installation VM wants 16GB RAM configured
> > > > > > though I've built older versions with 8GB RAM.
> > > > > > For modern VMs CentOS8 x86_64 recommends at least 2GB for a host.
> > > > > > CentOS7 was OK with 1, CentOS6 maybe 512K.
> > > > > > The tendency is always increasing with updated OS versions.
> > >
>
> > > > > Ok, so to clarify my question a little bit, I'm trying to figure
> > > > > out
> > > >
>
> > > > > how much RAM I would need to reserve for the host OS (or oVirt
> > > > > Node).
> > > >
>
> > > > > I do recall that CentOS / RHEL 8 wants a minimum of 2GB, so perhaps
> > > > > that would suffice?
> > > > > And then as you noted, I would need to plan to give the engine
> > > > > 16GB.
> > >
>
> > > > I run my engine on 4GB of RAM, but I have no more than 20 VMs; the
> > > > larger the setup, the more RAM the engine needs.
> > >
>
> > > > > > My minimum ovirt systems were mostly 48GB 16core, but most are
> > > > > > now
> > > >
>
> > > > > > 128GB 24core or more.
> > >
>
> > > > > But this is the total amount of physical RAM in your systems,
> > > > > correct?
> > > >
>
> > > > > Not the amount that you've reserved for your host OS?I've spec'd
> > > > > out
> > > >
>
> > > > > some hardware, and am probably looking at purchasing two PowerEdge
> > > > > R820's to start, each with 64GB RAM and 32 cores.
> > >
>
> > > > > > While ovirt can do what you would like it to do concerning a
> > > > > > single
> > > >
>
> > > > > > user interface, but with what you listed,
> > > > > > you're probably better off with just plain KVM/qemu and using
> > > > > > virt-manager for the interface.
> > >
>
> > > > > Can you migrate VMs from 1 host to another with virt-manager, and
> > > > > can
> > > >
>
> > > > > you take snapshots?
> > > > > If those two features aren't supported by virt-manager, then that
> > > > > would
> > > >
>
> > > > > almost certainly be a deal breaker.
> > >
>
> > > > The engine is just a management layer. KVM/qemu has that option a
> > > > 

[ovirt-users] Re: What happens when shared storage is down?

2020-06-10 Thread Jayme
This is of course not recommended, but there have been times where I have
lost network access to storage, or the storage server itself, while VMs were
running. They paused and came back up when storage was available again
without causing any problems. This doesn't mean it's 100% safe, but in my
experience it has not caused any issues.

Personally I would shutdown vms or live migrate the disk to secondary
storage then migrate it back after the updates are performed.

On Wed, Jun 10, 2020 at 2:22 AM Vinícius Ferrão via Users 
wrote:

>
>
> > On 7 Jun 2020, at 08:34, Strahil Nikolov  wrote:
> >
> >
> >
> > На 7 юни 2020 г. 1:58:27 GMT+03:00, "Vinícius Ferrão via Users" <
> users@ovirt.org> написа:
> >> Hello,
> >>
> >> This is a pretty vague and difficult question to answer. But what
> >> happens if the shared storage holding the VMs is down or unavailable
> >> for a period of time?
> > Once  a  pending I/O is blocked, libvirt will pause the VM .
> >
> >> I’m aware that a longer timeout may put the VMs on pause state, but how
> >> this is handled? Is it a time limit? Requests limit? Who manages this?
> > You have sanlock.service, which notifies the engine when a storage domain
> is inaccessible for more than 60s.
> >
> > Libvirt also will pause  a  VM when a pending I/O cannot be done.
> >
> >> In an event of self recovery of the storage backend what happens next?
> > Usually the engine should resume the VM, and from the application's
> perspective nothing has happened.
>
> Hmm thanks Strahil. I was thinking to upgrade the storage backend of one
> of my oVirt clusters without powering off the VM’s, just to be lazy.
>
> The storage does not have dual controllers, so downtime is needed. I’m
> trying to understand what happens so I can evaluate this update without
> turning off the VMs.
>
> >> Manual intervention is required? The VMs may be down or they just
> >> continue to run? It depends on the guest OS running like in XenServer
> >> where different scenarios may happen?
> >>
> >> I’ve looked here:
> >> https://www.ovirt.org/documentation/admin-guide/chap-Storage.html but
> >> there’s nothing that goes about this question.
> >>
> >> Thanks,
> >>
> >> Sent from my iPhone
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BVZAG2V3KBB364U5VBRCBIU42LJNGCI6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TKGXPENNO7RTW3S7G3GZODMNOHPULEMR/


[ovirt-users] Re: ovirt vm backup tool

2020-06-09 Thread Jayme
 I wrote a simple unofficial ansible playbook to back up full VMs here:
https://github.com/silverorange/ovirt_ansible_backup -- it works great for
my use case, but it is more geared toward smaller environments.

For commercial software I'd take a look at vProtect (it's free for up to 10
VMs)

I've heard some rumblings about incremental backup support in 4.4 as some
others have suggested but don't have much knowledge on the subject.



On Tue, Jun 9, 2020 at 1:16 PM Gianluca Cecchi 
wrote:

> On Tue, Jun 9, 2020 at 5:24 PM Shani Leviim  wrote:
>
>> Hi Shashank,
>> You can use the new incremental backup feature, which is available as a
>> tech preview in oVirt 4.4.
>>
>
> It seems it is not so; see this thread and errors received in 4.4 and
> latest answer from Nir:
>
> https://lists.ovirt.org/archives/list/users@ovirt.org/thread/CWLMCHTSWDNOLFUPPLOU7ORIVKHWD5GM/
>
> I too hoped to be able to test in 4.4 without going to master...
>
> Gianluca
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JFIDWTBW56OOAVOV7HHNEU2QVGRNXG3W/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FRCQPCBDU2HC3IUXTBZWFKDTG2GPFAL2/


[ovirt-users] Re: basic infra and glusterfs sizing question

2020-05-29 Thread Jayme
Also, I can't think of the limit off the top of my head. I believe it's
either 75 or 100Gb. If the engine volume is set any lower the installation
will fail. There is a minimum size requirement.

On Fri, May 29, 2020 at 12:09 PM Jayme  wrote:

> Regarding Gluster question. The volumes would be provisioned with LVM on
> the same block device. I believe 100Gb is recommended for the engine
> volume. The other volumes such as data would be created on another logical
> volume and you can use up the rest of the available space there. Ex. 100gb
> engine, 500Gb data and 400Gb vmstore.
>
> Data domains are basically the same now, in the past there used to be
> different domain types such as ISO domains which are deprecated. You don't
> really need any more than engine volume and data volume.  You could have a
> volume for storing ISOs if you wanted to. You could have a separate volume
> for OS disks and another volume for data disks which would give you more
> flexibility for backups (so that you could backup data disks but not OS for
> example).
>
> On Fri, May 29, 2020 at 10:29 AM Jiří Sléžka  wrote:
>
>> Hello,
>>
>> I am just curious whether the basic gluster HCI layout suggested in
>> cockpit has some deeper meaning.
>>
>> Three volumes are suggested:
>>
>> * engine - this one is clear, it is the volume where the engine VM runs.
>> When this VM is 51GB big, how small could this volume be? I have 1TB of SSD
>> storage and I would like to utilize it as much as possible. Could I create
>> this volume as small as the VM itself? Is it safe, for example, for future
>> upgrades?
>>
>> * vmstore - it makes sense that this is the space for all other VMs running
>> in oVirt, right?
>>
>> * data - what purpose does this volume have? Other data, like for example
>> ISOs? Direct disks?
>>
>> Another infra question... or maybe request for comment
>>
>> I have a small number of public ipv4 addresses at my housing facility (but
>> I have my own switches there so I can create vlans and separate internal
>> traffic). I can only access these public ipv4 addresses directly. I would
>> like to conserve these addresses as much as possible, so what is the best
>> approach in your opinion?
>>
>> * Install all hosts and HE with the management network on private addresses
>>
>>   * have a small router (hw appliance with for example LEDE) which will
>> utilize one ipv4 address and will do NAT and vpn for accessing my
>> internal vlans.
>> + looks like a simple approach to me
>> - single point of failure in this router (not really - just in case
>> oVirt is badly broken and I need to access internal vlans to recover it)
>>
>>   * have this router as a virtual appliance inside oVirt (something like
>> pfSense for example)
>> + no hw router needed
>> + not sure, but I could probably configure vrrp redundancy
>> - still a single point of failure like in the first case
>>
>>   * any other approach? Could ovn help here somehow?
>>
>> * Install all hosts and HE with public addresses :-)
>>   + access to all hosts directly
>>   - a 3 node HCI cluster uses 4 public ip addresses
>>
>> Thanks for your opinions
>>
>> Cheers,
>>
>> Jiri
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LIFQQHTFVTS6KICR5MTRPGO5CH7QDLK7/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z232GYXPCKDAH2FCYSJQSDTD7GL6CUT7/


[ovirt-users] Re: basic infra and glusterfs sizing question

2020-05-29 Thread Jayme
Regarding Gluster question. The volumes would be provisioned with LVM on
the same block device. I believe 100Gb is recommended for the engine
volume. The other volumes such as data would be created on another logical
volume and you can use up the rest of the available space there. Ex. 100gb
engine, 500Gb data and 400Gb vmstore.
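
A minimal sketch of that layout on a single 1TB device (assuming /dev/sdb and
the naming the HCI wizard tends to use; the wizard normally does all of this
for you, often with thin LVs):

pvcreate /dev/sdb
vgcreate gluster_vg_sdb /dev/sdb
lvcreate -L 100G -n gluster_lv_engine gluster_vg_sdb
lvcreate -L 500G -n gluster_lv_data gluster_vg_sdb
lvcreate -L 400G -n gluster_lv_vmstore gluster_vg_sdb
mkfs.xfs /dev/gluster_vg_sdb/gluster_lv_engine   # likewise for data and vmstore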

Data domains are basically the same now, in the past there used to be
different domain types such as ISO domains which are deprecated. You don't
really need any more than engine volume and data volume.  You could have a
volume for storing ISOs if you wanted to. You could have a separate volume
for OS disks and another volume for data disks which would give you more
flexibility for backups (so that you could backup data disks but not OS for
example).

On Fri, May 29, 2020 at 10:29 AM Jiří Sléžka  wrote:

> Hello,
>
> I am just curious whether the basic gluster HCI layout suggested in
> cockpit has some deeper meaning.
>
> Three volumes are suggested:
>
> * engine - this one is clear, it is the volume where the engine VM runs.
> When this VM is 51GB big, how small could this volume be? I have 1TB of SSD
> storage and I would like to utilize it as much as possible. Could I create
> this volume as small as the VM itself? Is it safe, for example, for future
> upgrades?
>
> * vmstore - it makes sense that this is the space for all other VMs running
> in oVirt, right?
>
> * data - what purpose does this volume have? Other data, like for example
> ISOs? Direct disks?
>
> Another infra question... or maybe request for comment
>
> I have a small number of public ipv4 addresses at my housing facility (but
> I have my own switches there so I can create vlans and separate internal
> traffic). I can only access these public ipv4 addresses directly. I would
> like to conserve these addresses as much as possible, so what is the best
> approach in your opinion?
>
> * Install all hosts and HE with the management network on private addresses
>
>   * have a small router (hw appliance with for example LEDE) which will
> utilize one ipv4 address and will do NAT and vpn for accessing my
> internal vlans.
> + looks like a simple approach to me
> - single point of failure in this router (not really - just in case
> oVirt is badly broken and I need to access internal vlans to recover it)
>
>   * have this router as a virtual appliance inside oVirt (something like
> pfSense for example)
> + no hw router needed
> + not sure, but I could probably configure vrrp redundancy
> - still a single point of failure like in the first case
>
>   * any other approach? Could ovn help here somehow?
>
> * Install all hosts and HE with public addresses :-)
>   + access to all hosts directly
>   - a 3 node HCI cluster uses 4 public ip addresses
>
> Thanks for your opinions
>
> Cheers,
>
> Jiri
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LIFQQHTFVTS6KICR5MTRPGO5CH7QDLK7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FINP3OVMABC7VG4OY7TINSK4OMLHCBL2/


[ovirt-users] Re: ovirt-websocket-proxy errors when trying noVNC

2020-05-28 Thread Jayme
Here is the bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1832210

On Thu, May 28, 2020 at 8:23 AM Jayme  wrote:

> If it's the issue I'm thinking of, it's because Apple (as of Mojave) started
> rejecting certs whose validity period is longer than a certain limit, which
> the oVirt CA does not respect. I posted another message on this group
> about it a little while ago and I think a bug report was made.
>
> The only way I can get noVNC to work on the Mac is by using Firefox and
> making sure the CA is imported and trusted by Firefox. I cannot get it to
> work with Safari or Chrome.
>
> On Thu, May 28, 2020 at 8:08 AM Louis Bohm  wrote:
>
>> So as I said before, I added the CA cert to my Mac (and I can see it in
>> the Mac's Keychain).  But it's still not working.  For humor I will try
>> adding the CA to my Windows VM and see if that produces a different result.
>>
>> Louis
>> -<<—->>-
>> Louis Bohm
>> louisb...@gmail.com
>>
>>
>>
>> On May 27, 2020, at 11:01 AM, Scott Dickerson 
>> wrote:
>>
>>
>> On Wed, May 27, 2020 at 7:42 AM Louis Bohm  wrote:
>>
>>> OS: Oracle Linux 7.8 (unbreakable kernel)
>>> Using Oracle Linux Virtualization Manager: Software
>>> Version:4.3.6.6-1.0.9.el7
>>>
>>> Since I am running all of it on one physical machine I opted to install
>>> the ovirt-engine using the accept defaults option.
>>>
>>> When I try to start a noVNC console I see this in the messages file:
>>>
>>> May 26 16:49:12 lfg-kvm saslpasswd2: Could not find keytab file:
>>> /etc/qemu/krb5.tab: No such file or directory
>>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>>> May 26 16:49:14 lfg-kvm journal: 2020-05-26 16:49:14,704-0400
>>> ovirt-websocket-proxy: INFO msg:824 handler exception: [SSL:
>>> SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown
>>> (_ssl.c:618)
>>> May 26 16:49:14 lfg-kvm ovirt-websocket-proxy.py:
>>> ovirt-websocket-proxy[14582] INFO msg:824 handler exception: [SSL:
>>> SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown
>>> (_ssl.c:618)
>>>
>>>
>>> I have checked the following:
>>>
>>> [root@lfg-kvm ~]#  engine-config -g WebSocketProxy
>>> WebSocketProxy: lfg-kvm.corp.lfg.com:6100 version: general
>>> [root@lfg-kvm ~]# engine-config -g SpiceProxyDefault
>>> SpiceProxyDefault: http://lfg-kvm.corp.lfg.com:6100 version: general
>>>
>>>
>>> This is a brand new install.
>>>
>>> I am also unable to get a VNC console up and running.  I have tried with
>>> an Ubuntu VM running on my MAC where I installed virt-manager.  The viewer
>>> comes up for a second, says it cannot connect, and then shuts down.
>>>
>>>
>> If you're only using noVNC, then you need to make sure you import the CA
>> Cert and trust it in your browser.  There is no way to interactively accept
>> the self-signed cert from the engine when noVNC connects via the websocket
>> proxy.
>>
>>
>>> Anyone have any clue?
>>> -<<—->>-
>>> Louis Bohm
>>> louisb...@gmail.com
>>>
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/U66GSTI4QJSGPM6LUVF2WC2UW5JQCNCX/
>>>
>

[ovirt-users] Re: ovirt-websocket-proxy errors when trying noVNC

2020-05-28 Thread Jayme
If it's the issue I'm thinking of, it's because Apple (as of Mojave) started
rejecting certs whose validity period is longer than a certain limit, which
the oVirt CA does not respect. I posted another message on this group
about it a little while ago and I think a bug report was made.

The only way I can get noVNC to work on the Mac is by using Firefox and making
sure the CA is imported and trusted by Firefox. I cannot get it to work
with Safari or Chrome.
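
You can check the validity period of the cert the engine presents with openssl
(the hostname is a placeholder):

openssl s_client -connect engine.example.com:443 </dev/null 2>/dev/null \
    | openssl x509 -noout -dates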

On Thu, May 28, 2020 at 8:08 AM Louis Bohm  wrote:

> So as I said before, I added the CA cert to my Mac (and I can see it in the
> Mac's Keychain).  But it's still not working.  For humor I will try adding
> the CA to my Windows VM and see if that produces a different result.
>
> Louis
> -<<—->>-
> Louis Bohm
> louisb...@gmail.com
>
>
> 
>
> 
>
> On May 27, 2020, at 11:01 AM, Scott Dickerson  wrote:
>
>
> On Wed, May 27, 2020 at 7:42 AM Louis Bohm  wrote:
>
>> OS: Oracle Linux 7.8 (unbreakable kernel)
>> Using Oracle Linux Virtualization Manager: Software
>> Version:4.3.6.6-1.0.9.el7
>>
>> Since I am running all of it on one physical machine I opted to install
>> the ovirt-engine using the accept defaults option.
>>
>> When I try to start a noVNC console I see this in the messages file:
>>
>> May 26 16:49:12 lfg-kvm saslpasswd2: Could not find keytab file:
>> /etc/qemu/krb5.tab: No such file or directory
>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>> May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb:
>> BDB0073 DB_NOTFOUND: No matching key/data pair found
>> May 26 16:49:14 lfg-kvm journal: 2020-05-26 16:49:14,704-0400
>> ovirt-websocket-proxy: INFO msg:824 handler exception: [SSL:
>> SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown
>> (_ssl.c:618)
>> May 26 16:49:14 lfg-kvm ovirt-websocket-proxy.py:
>> ovirt-websocket-proxy[14582] INFO msg:824 handler exception: [SSL:
>> SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown
>> (_ssl.c:618)
>>
>>
>> I have checked the following:
>>
>> [root@lfg-kvm ~]#  engine-config -g WebSocketProxy
>> WebSocketProxy: lfg-kvm.corp.lfg.com:6100 version: general
>> [root@lfg-kvm ~]# engine-config -g SpiceProxyDefault
>> SpiceProxyDefault: http://lfg-kvm.corp.lfg.com:6100 version: general
>>
>>
>> This is a brand new install.
>>
>> I am also unable to get a VNC console up and running.  I have tried with
>> an Ubuntu VM running on my MAC where I installed virt-manager.  The viewer
>> comes up for a second, says it cannot connect, and then shuts down.
>>
>>
> If you're only using noVNC, then you need to make sure you import the CA
> Cert and trust it in your browser.  There is no way to interactively accept
> the self-signed cert from the engine when noVNC connects via the websocket
> proxy.
>
>
>> Anyone have any clue?
>> -<<—->>-
>> Louis Bohm
>> louisb...@gmail.com
>>
>> 
>>
>> 
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/U66GSTI4QJSGPM6LUVF2WC2UW5JQCNCX/
>>
>
>
> --
> Scott Dickerson
> Senior Software Engineer
> RHV-M Engineering - UX Team
> Red Hat, Inc
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WLZEDVEV5E4XTEM4Y6M4W3VJ4ODSISUS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F5PGDWYJM7IUAP67KGSHNKU377QEI3Q4/


[ovirt-users] Re: ovirt-node-ng-installer-4.4.0-2020051507.el8.iso does not support PREC5 raid controller ?

2020-05-15 Thread Jayme
This is likely due to CentOS 8, not the Node image in particular. CentOS 8
dropped support for many LSI RAID controllers, including older PERC
controllers:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/hardware-enablement_considerations-in-adopting-rhel-8#removed-hardware-support_hardware-enablement

It is possible to load drivers during install. I have not done it with the
Node image, but I know it's possible with a regular CentOS 8 install.
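
With a regular CentOS 8 install it is the standard anaconda driver-disk
mechanism, e.g. (you have to source the driver-disk image yourself, for
example from ELRepo; I can't confirm the same works with the Node ISO):

# appended to the installer's kernel command line
inst.dd=/dev/sdc1
# or load the image over the network
inst.dd=http://myserver/dd.iso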



On Fri, May 15, 2020 at 8:04 PM  wrote:

> Trying to use my Dell 2850 as an oVirt node, the installer does not show the
> RAID0 disk pair that ovirt-node-4.3.9 was able to use as the install
> destination.
> The installer shows no disks at all in the system; it has 6, all seen by 4.3.9.
>
> Thanks Bryan
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDSVUZSESOXEFJNPHOXUH4HOOWRIRSB4/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YXSXVT23KMEHLUT2F6ESWIETRODWQPS2/


[ovirt-users] Re: Gluster deployment fails with missing UUID

2020-04-28 Thread Jayme
Has the drive been used before? It might have an existing partition/filesystem
on it. If you are sure it's fine to overwrite, try running wipefs -a
/dev/sdb on all hosts. Also make sure there aren't any filters set up in
lvm.conf (there shouldn't be on a fresh install, but it's worth checking).
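
For example, a quick pre-check sequence on each host might look like this
(the second command is destructive, so only run it if /dev/sdb really is
disposable):

  wipefs /dev/sdb                                    # list existing signatures
  wipefs -a /dev/sdb                                 # wipe them
  grep -E '^[[:space:]]*filter' /etc/lvm/lvm.conf    # look for lvm filters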

On Tue, Apr 28, 2020 at 8:22 PM Shareef Jalloq  wrote:

> Hi,
>
> I'm running the gluster deployment flow and am trying to use a second
> drive as the gluster volume.  It's /dev/sdb on each node and I'm using the
> JBOD mode.
>
> I'm seeing the following gluster ansible task fail and a google search
> doesn't bring up much.
>
> TASK [gluster.infra/roles/backend_setup : Create volume groups]
> 
>
> failed: [ovirt-gluster-01.jalloq.co.uk] (item={u'vgname':
> u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item",
> "changed": false, "err": "  Couldn't find device with uuid
> Y8FVs8-LP6w-R6CR-Yosh-c40j-17XP-ttP3Np.\n  Couldn't find device with uuid
> tA4lpO-hM9f-S8ci-BdPh-lTve-0Rh1-3Bcsfy.\n  Couldn't find device with uuid
> RG3w6j-yrxn-2iMw-ngd0-HgMS-i5dP-CGjaRk.\n  Couldn't find device with uuid
> lQV02e-TUZE-PXCd-GWEd-eGqe-c2xC-pauHG7.\n  Device /dev/sdb excluded by a
> filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
> "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5U3K3IPYCFOLUFJ56FGJI3TYWT6NOLAZ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VTE5EPSGAAMXRLFQ75CHDW7MMPO5FGGC/


[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Jayme
Oh, and the gluster interface should not be set as the default route either.

On Tue, Apr 28, 2020 at 7:19 PM Jayme  wrote:

> On gluster interface try setting gateway to 10.0.1.1
>
> If that doesn’t work let us know where the process is failing currently
> and with what errors etc.
>
> On Tue, Apr 28, 2020 at 6:54 PM Shareef Jalloq 
> wrote:
>
>> Thanks.  I have the DNS but must have my interface config wrong.  On my
>> first node I have two interfaces in use, em1 for the management interface
>> and p1p1 for the Gluster interface.
>>
>> [root@ovirt-node-00 ~]# cat /etc/sysconfig/network-scripts/ifcfg-em1
>>
>> TYPE=Ethernet
>>
>> PROXY_METHOD=none
>>
>> BROWSER_ONLY=no
>>
>> BOOTPROTO=none
>>
>> DEFROUTE=yes
>>
>> IPV4_FAILURE_FATAL=no
>>
>> IPV6INIT=no
>>
>> IPV6_AUTOCONF=yes
>>
>> IPV6_DEFROUTE=yes
>>
>> IPV6_FAILURE_FATAL=no
>>
>> IPV6_ADDR_GEN_MODE=stable-privacy
>>
>> NAME=em1
>>
>> UUID=724cddb2-8ce9-43ea-8c0e-e1aff19e72cc
>>
>> DEVICE=em1
>>
>> ONBOOT=yes
>>
>> IPADDR=10.0.0.31
>>
>> PREFIX=24
>>
>> GATEWAY=10.0.0.1
>>
>> DNS1=10.0.0.1
>>
>>
>> [root@ovirt-node-00 ~]# cat /etc/sysconfig/network-scripts/ifcfg-p1p1
>>
>> TYPE=Ethernet
>>
>> PROXY_METHOD=none
>>
>> BROWSER_ONLY=no
>>
>> BOOTPROTO=none
>>
>> DEFROUTE=yes
>>
>> IPV4_FAILURE_FATAL=no
>>
>> IPV6INIT=no
>>
>> IPV6_AUTOCONF=yes
>>
>> IPV6_DEFROUTE=yes
>>
>> IPV6_FAILURE_FATAL=no
>>
>> IPV6_ADDR_GEN_MODE=stable-privacy
>>
>> NAME=p1p1
>>
>> UUID=1adb45d3-4dac-4bac-bb19-257fb9c7016b
>>
>> DEVICE=p1p1
>>
>> ONBOOT=yes
>>
>> IPADDR=10.0.1.31
>>
>> PREFIX=24
>>
>> GATEWAY=10.0.0.1
>>
>> DNS1=10.0.0.1
>>
>> On Tue, Apr 28, 2020 at 10:47 PM Jayme  wrote:
>>
>>>  You should use host names for gluster like gluster1.hostname.com that
>>> resolve to the ip chosen for gluster.
>>>
>>> For my env I have something like this:
>>>
>>> Server0:
>>> Host0.example.com 10.10.0.100
>>> Gluster0.example.com 10.0.1.100
>>>
>>> Same thing for the other two servers, except hostnames and IPs of course.
>>>
>>> Use the gluster hostnames for the first step, then the server hostnames
>>> for the others.
>>>
>>> I made sure I could ssh to and from both hostX and glusterX on each
>>> server.
>>>
>>> On Tue, Apr 28, 2020 at 6:34 PM Shareef Jalloq 
>>> wrote:
>>>
>>>> Perhaps it's me, but these two documents seem to disagree on what
>>>> hostnames to use when setting up.  Can someone clarify.
>>>>
>>>> The main documentation here:
>>>> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>>>>  talks
>>>> about copying the SSH keys to the gluster host address but the old blog
>>>> post with an outdated interface here:
>>>> https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>>>>  uses
>>>> the node address.
>>>>
>>>> In the first step of the hyperconverged Gluster wizard, when it asks
>>>> for "Gluster network address", is this wanting the host IP or the IP of the
>>>> Gluster interface?
>>>>
>>>> On Tue, Apr 28, 2020 at 10:24 PM Shareef Jalloq 
>>>> wrote:
>>>>
>>>>> OK, thanks both, that seems to have fixed that issue.
>>>>>
>>>>> Is there any other config I need to do because the next step in the
>>>>> deployment guide of copying SSH keys seems to take over a minute just to
>>>>> prompt for a password.  Something smells here.
>>>>>
>>>>> On Tue, Apr 28, 2020 at 7:32 PM Jayme  wrote:
>>>>>
>>>>>> You should be using a different subnet for each. I.e. 10.0.0.30 and
>>>>>> 10.0.1.30 for example
>>>>>>
>>>>>> On Tue, Apr 28, 2020 at 2:49 PM Shareef Jalloq 
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I'm in the process of trying to set up an HCI 3 node cluster in my
>>>>>>> homelab to better understand the Gluster setup and have failed at the 

[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Jayme
On gluster interface try setting gateway to 10.0.1.1

If that doesn’t work let us know where the process is failing currently and
with what errors etc.
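
For what it's worth, a corrected ifcfg for the gluster NIC would look roughly
like this (a sketch for the 10.0.1.0/24 storage subnet; the gateway line is
only needed if storage traffic must actually be routed):

  DEVICE=p1p1
  ONBOOT=yes
  BOOTPROTO=none
  IPADDR=10.0.1.31
  PREFIX=24
  DEFROUTE=no
  #GATEWAY=10.0.1.1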

On Tue, Apr 28, 2020 at 6:54 PM Shareef Jalloq  wrote:

> Thanks.  I have the DNS but must have my interface config wrong.  On my
> first node I have two interfaces in use, em1 for the management interface
> and p1p1 for the Gluster interface.
>
> [root@ovirt-node-00 ~]# cat /etc/sysconfig/network-scripts/ifcfg-em1
>
> TYPE=Ethernet
>
> PROXY_METHOD=none
>
> BROWSER_ONLY=no
>
> BOOTPROTO=none
>
> DEFROUTE=yes
>
> IPV4_FAILURE_FATAL=no
>
> IPV6INIT=no
>
> IPV6_AUTOCONF=yes
>
> IPV6_DEFROUTE=yes
>
> IPV6_FAILURE_FATAL=no
>
> IPV6_ADDR_GEN_MODE=stable-privacy
>
> NAME=em1
>
> UUID=724cddb2-8ce9-43ea-8c0e-e1aff19e72cc
>
> DEVICE=em1
>
> ONBOOT=yes
>
> IPADDR=10.0.0.31
>
> PREFIX=24
>
> GATEWAY=10.0.0.1
>
> DNS1=10.0.0.1
>
>
> [root@ovirt-node-00 ~]# cat /etc/sysconfig/network-scripts/ifcfg-p1p1
>
> TYPE=Ethernet
>
> PROXY_METHOD=none
>
> BROWSER_ONLY=no
>
> BOOTPROTO=none
>
> DEFROUTE=yes
>
> IPV4_FAILURE_FATAL=no
>
> IPV6INIT=no
>
> IPV6_AUTOCONF=yes
>
> IPV6_DEFROUTE=yes
>
> IPV6_FAILURE_FATAL=no
>
> IPV6_ADDR_GEN_MODE=stable-privacy
>
> NAME=p1p1
>
> UUID=1adb45d3-4dac-4bac-bb19-257fb9c7016b
>
> DEVICE=p1p1
>
> ONBOOT=yes
>
> IPADDR=10.0.1.31
>
> PREFIX=24
>
> GATEWAY=10.0.0.1
>
> DNS1=10.0.0.1
>
> On Tue, Apr 28, 2020 at 10:47 PM Jayme  wrote:
>
>>  You should use host names for gluster like gluster1.hostname.com that
>> resolve to the ip chosen for gluster.
>>
>> For my env I have something like this:
>>
>> Server0:
>> Host0.example.com 10.10.0.100
>> Gluster0.example.com 10.0.1.100
>>
>> Same thing for the other two servers, except hostnames and IPs of course.
>>
>> Use the gluster hostnames for the first step, then the server hostnames for
>> the others.
>>
>> I made sure I could ssh to and from both hostX and glusterX on each
>> server.
>>
>> On Tue, Apr 28, 2020 at 6:34 PM Shareef Jalloq 
>> wrote:
>>
>>> Perhaps it's me, but these two documents seem to disagree on what
>>> hostnames to use when setting up.  Can someone clarify.
>>>
>>> The main documentation here:
>>> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>>>  talks
>>> about copying the SSH keys to the gluster host address but the old blog
>>> post with an outdated interface here:
>>> https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>>>  uses
>>> the node address.
>>>
>>> In the first step of the hyperconverged Gluster wizard, when it asks for
>>> "Gluster network address", is this wanting the host IP or the IP of the
>>> Gluster interface?
>>>
>>> On Tue, Apr 28, 2020 at 10:24 PM Shareef Jalloq 
>>> wrote:
>>>
>>>> OK, thanks both, that seems to have fixed that issue.
>>>>
>>>> Is there any other config I need to do because the next step in the
>>>> deployment guide of copying SSH keys seems to take over a minute just to
>>>> prompt for a password.  Something smells here.
>>>>
>>>> On Tue, Apr 28, 2020 at 7:32 PM Jayme  wrote:
>>>>
>>>>> You should be using a different subnet for each. I.e. 10.0.0.30 and
>>>>> 10.0.1.30 for example
>>>>>
>>>>> On Tue, Apr 28, 2020 at 2:49 PM Shareef Jalloq 
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I'm in the process of trying to set up an HCI 3 node cluster in my
>>>>>> homelab to better understand the Gluster setup and have failed at the 
>>>>>> first
>>>>>> hurdle. I've set up the node interfaces on the built in NIC and am using 
>>>>>> a
>>>>>> PCI NIC for the Gluster traffic - at the moment this is 1Gb until I can
>>>>>> upgrade - and I've assigned a static IP to both interfaces and also have
>>>>>> both entries in my DNS.
>>>>>>
>>>>>> From any of the three nodes, I can ping the gateway, the other nodes,
>>>>>> any external IP but I can't ping any of the Gluster NICs.  What have I
>>>>>> forgotten to do? Here's the relevant output of 'ip addr show'.  em1 is 
>>>>>>

[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Jayme
You should use hostnames for gluster, like gluster1.hostname.com, that
resolve to the IP chosen for gluster.

For my env I have something like this:

Server0:
Host0.example.com 10.10.0.100
Gluster0.example.com 10.0.1.100

Same thing for the other two servers, except hostnames and IPs of course.

Use the gluster hostnames for the first step, then the server hostnames for
the others.

I made sure I could ssh to and from both hostX and glusterX on each server.
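
In /etc/hosts terms (a sketch using the example names above; the same thing
works via DNS records):

  10.10.0.100  host0.example.com     host0
  10.0.1.100   gluster0.example.com  gluster0
  10.10.0.101  host1.example.com     host1
  10.0.1.101   gluster1.example.com  gluster1
  10.10.0.102  host2.example.com     host2
  10.0.1.102   gluster2.example.com  gluster2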

On Tue, Apr 28, 2020 at 6:34 PM Shareef Jalloq  wrote:

> Perhaps it's me, but these two documents seem to disagree on what
> hostnames to use when setting up.  Can someone clarify.
>
> The main documentation here:
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>  talks
> about copying the SSH keys to the gluster host address but the old blog
> post with an outdated interface here:
> https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>  uses
> the node address.
>
> In the first step of the hyperconverged Gluster wizard, when it asks for
> "Gluster network address", is this wanting the host IP or the IP of the
> Gluster interface?
>
> On Tue, Apr 28, 2020 at 10:24 PM Shareef Jalloq 
> wrote:
>
>> OK, thanks both, that seems to have fixed that issue.
>>
>> Is there any other config I need to do because the next step in the
>> deployment guide of copying SSH keys seems to take over a minute just to
>> prompt for a password.  Something smells here.
>>
>> On Tue, Apr 28, 2020 at 7:32 PM Jayme  wrote:
>>
>>> You should be using a different subnet for each. I.e. 10.0.0.30 and
>>> 10.0.1.30 for example
>>>
>>> On Tue, Apr 28, 2020 at 2:49 PM Shareef Jalloq 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I'm in the process of trying to set up an HCI 3 node cluster in my
>>>> homelab to better understand the Gluster setup and have failed at the first
>>>> hurdle. I've set up the node interfaces on the built in NIC and am using a
>>>> PCI NIC for the Gluster traffic - at the moment this is 1Gb until I can
>>>> upgrade - and I've assigned a static IP to both interfaces and also have
>>>> both entries in my DNS.
>>>>
>>>> From any of the three nodes, I can ping the gateway, the other nodes,
>>>> any external IP but I can't ping any of the Gluster NICs.  What have I
>>>> forgotten to do? Here's the relevant output of 'ip addr show'.  em1 is the
>>>> motherboard NIC and p1p1 is port 1 of an Intel NIC.  The
>>>> /etc/sysconfig/network-scripts/ifcfg-<interface> scripts are identical aside from
>>>> IPADDR, NAME, DEVICE and UUID fields.
>>>>
>>>> Thanks, Shareef.
>>>>
>>>> [root@ovirt-node-00 ~]# ip addr show
>>>>
>>>>
>>>> 2: p1p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
>>>> group default qlen 1000
>>>>
>>>> link/ether a0:36:9f:1f:f9:78 brd ff:ff:ff:ff:ff:ff
>>>>
>>>> inet 10.0.0.34/24 brd 10.0.0.255 scope global noprefixroute p1p1
>>>>
>>>>valid_lft forever preferred_lft forever
>>>>
>>>> inet6 fd4d:e9e3:6f5:1:a236:9fff:fe1f:f978/64 scope global
>>>> mngtmpaddr dynamic
>>>>
>>>>valid_lft 7054sec preferred_lft 7054sec
>>>>
>>>> inet6 fe80::a236:9fff:fe1f:f978/64 scope link
>>>>
>>>>valid_lft forever preferred_lft forever
>>>>
>>>>
>>>> 4: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>>>> state UP group default qlen 1000
>>>>
>>>> link/ether 98:90:96:a1:16:ad brd ff:ff:ff:ff:ff:ff
>>>>
>>>> inet 10.0.0.31/24 brd 10.0.0.255 scope global noprefixroute em1
>>>>
>>>>valid_lft forever preferred_lft forever
>>>>
>>>> inet6 fd4d:e9e3:6f5:1:9a90:96ff:fea1:16ad/64 scope global
>>>> mngtmpaddr dynamic
>>>>
>>>>valid_lft 7054sec preferred_lft 7054sec
>>>>
>>>> inet6 fe80::9a90:96ff:fea1:16ad/64 scope link
>>>>
>>>>valid_lft forever preferred_lft forever
>>>>
>>>>
>>>> ___
>>>> Users mailing list -- users@ovirt.org
>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/S7UESGZ6MJXPVKN2UZJTO4OZYGOQIWHE/
>>>>
>>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KVMZSD7YPPSCFO6RKTRKA2BAVJGAFDRE/


[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Jayme
You should be using a different subnet for each. I.e. 10.0.0.30 and
10.0.1.30 for example

On Tue, Apr 28, 2020 at 2:49 PM Shareef Jalloq  wrote:

> Hi,
>
> I'm in the process of trying to set up an HCI 3 node cluster in my homelab
> to better understand the Gluster setup and have failed at the first hurdle.
> I've set up the node interfaces on the built in NIC and am using a PCI NIC
> for the Gluster traffic - at the moment this is 1Gb until I can upgrade -
> and I've assigned a static IP to both interfaces and also have both entries
> in my DNS.
>
> From any of the three nodes, I can ping the gateway, the other nodes, any
> external IP but I can't ping any of the Gluster NICs.  What have I
> forgotten to do? Here's the relevant output of 'ip addr show'.  em1 is the
> motherboard NIC and p1p1 is port 1 of an Intel NIC.  The
> /etc/sysconfig/network-scripts/ifcfg-<interface> scripts are identical aside from
> IPADDR, NAME, DEVICE and UUID fields.
>
> Thanks, Shareef.
>
> [root@ovirt-node-00 ~]# ip addr show
>
>
> 2: p1p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
> group default qlen 1000
>
> link/ether a0:36:9f:1f:f9:78 brd ff:ff:ff:ff:ff:ff
>
> inet 10.0.0.34/24 brd 10.0.0.255 scope global noprefixroute p1p1
>
>valid_lft forever preferred_lft forever
>
> inet6 fd4d:e9e3:6f5:1:a236:9fff:fe1f:f978/64 scope global mngtmpaddr
> dynamic
>
>valid_lft 7054sec preferred_lft 7054sec
>
> inet6 fe80::a236:9fff:fe1f:f978/64 scope link
>
>valid_lft forever preferred_lft forever
>
>
> 4: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
> UP group default qlen 1000
>
> link/ether 98:90:96:a1:16:ad brd ff:ff:ff:ff:ff:ff
>
> inet 10.0.0.31/24 brd 10.0.0.255 scope global noprefixroute em1
>
>valid_lft forever preferred_lft forever
>
> inet6 fd4d:e9e3:6f5:1:9a90:96ff:fea1:16ad/64 scope global mngtmpaddr
> dynamic
>
>valid_lft 7054sec preferred_lft 7054sec
>
> inet6 fe80::9a90:96ff:fea1:16ad/64 scope link
>
>valid_lft forever preferred_lft forever
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/S7UESGZ6MJXPVKN2UZJTO4OZYGOQIWHE/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I2UVP4THQIODVBRN46IHDYYDIWBFLG4E/


[ovirt-users] Re: VM disk I/O

2020-04-21 Thread Jayme
What is the VM optimizer you speak of?

Have you tried the High Performance VM profile? When set, it will prompt you
to make additional manual changes, such as configuring NUMA, hugepages,
etc.



On Tue, Apr 21, 2020 at 8:52 AM  wrote:

> On oVirt 4.3 I installed w10_64 with a q35 CPU.
> I've used a VM optimizer for better performance for end-users. It seems good.
> But I need more performance guidelines.
> Ex.
> Our system has FC storage; are there any options for better read/write
> performance (hugepages, write-through)?
> If you have any suggestions along these lines, could you share them?
>
> Thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KH6ULJYIRBPDNEAR5CASDY2IOT3ARVHA/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VCWGMW42Z77FUQ7L27IFAMRJHB2P5HUS/


[ovirt-users] Re: VM's in status unknown

2020-04-16 Thread Jayme
Do you have the guest agent installed on the VMs?

On Thu, Apr 16, 2020 at 2:55 PM  wrote:

> Are you getting any errors in the engine log or
> /var/log/libvirt/qemu/.log?
> I have Windows 10 and haven't experienced that. You can't shut it down in
> the UI? Even after you try to shut it down inside Windows?
> I will assume you have the latest guest tools installed.
>
> Eric Evans
> Digital Data Services LLC.
> 304.660.9080
>
>
> -Original Message-
> From: kim.karga...@noroff.no 
> Sent: Thursday, April 16, 2020 8:23 AM
> To: users@ovirt.org
> Subject: [ovirt-users] VM's in status unknown
>
> Hi,
>
> We have a few Windows 10 VMs running on our oVirt 4.3, where when you
> shut down the VM from within Windows, the VM does not shut down but gets a
> status of unknown in oVirt, and one cannot do anything to the machines
> within the web GUI. This seems to be specifically related to the Windows 10
> template that we have created. Any ideas? Also, any ideas on how one can
> shut these machines down?
>
> Thanks
>
> Kim
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org Privacy Statement:
> https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UM26TABUTG373QVXEI4UJN3EAKANLWHL/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/M4LITHLIZXJDTZACENZX4NLO7LSB6VAM/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/77WG7WMSTFE7XBJN6EU3E3AZMV36TZ6B/


[ovirt-users] Re: How to load VMs after importing a domain

2020-04-16 Thread Jayme
In oVirt admin go to Storage > Domains. Click your storage domain. Click
"Virtual Machines" tab. You should see a list of VMs on that storage
domain. Click one or highlight multiple then click import.

On Thu, Apr 16, 2020 at 2:34 PM  wrote:

> If you click on the 3 dots in the VM portal, there is an import there;
> then choose what you import from.
>
> See attached screenshot.
>
> Is this what your looking for?
>
>
>
> Eric Evans
>
> Digital Data Services LLC.
>
> 304.660.9080
>
>
>
> *From:* Shareef Jalloq 
> *Sent:* Thursday, April 16, 2020 10:11 AM
> *To:* Ovirt Users 
> *Subject:* [ovirt-users] How to load VMs after importing a domain
>
>
>
> I've followed the online instructions on importing a pre-configured domain
> into a new data centre but I can't see how to import the VMs.  The
> documentation just says, "You can now import virtual machines and templates
> from the storage domain to the data center." with no other info.
>
>
>
> What do I need to do in order to get my VMs up and running?
>
>
>
> Cheers, Shareef.
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ECF6LZMYPEOTZRJ4UTGGLBMJFVWNLFSR/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HXN6VZDQL2E3IMSZ4KPN6NGA63PH73WN/


[ovirt-users] Re: Can't deploy engine vm with ovirt-hosted-engine-setup

2020-04-14 Thread Jayme
The error suggests a problem with ansible. What packages are you using?

On Tue, Apr 14, 2020 at 1:51 AM Gabriel Bueno  wrote:

> Does anyone have any clue why it may be happening?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UML2K4XNGD6JBTQEYDYQS2ZQABOC6X3T/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PALXTTJXGPL5GEDDHLZH3AO5RB53ESZB/


[ovirt-users] engine certificate problems in MacOS Catalina

2020-04-13 Thread Jayme
I recently set up a new oVirt environment using the latest 4.3.9 installer. I
can't seem to get the noVNC client to work for the life of me in Safari or
Chrome on macOS Catalina.

I have downloaded the CA from the login page and imported it into keychain
and made sure it was fully trusted. In both system and login keychains.

Looking at this: https://support.apple.com/en-us/HT210176 seems to suggest
that the certificate may be rejected if the validity period is longer than 825
days. The certificate generated by the oVirt installer seems to be valid for 5
years. I'm not sure if this is the issue or something else is wrong, but if
cert validity length is the problem, is there a way for me to regenerate a new
certificate with a shorter validity period?
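
For reference, the validity window of the engine certificate can be checked
on the engine host (standard install path assumed) with:

  openssl x509 -noout -dates -in /etc/pki/ovirt-engine/certs/apache.cer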
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YNL6NSW6GP3IR7GECYE6DNPJA6H2X3RB/


[ovirt-users] snapshot options on remote NFS storage

2020-04-03 Thread Jayme
Was wondering if there are any guides, or if anyone could share their
storage configuration details for NFS. If using LVM, is it safe to snapshot
volumes holding running VM images for backup purposes?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XWTO4BVMIJRPF7EMEGOK2XTZZU6PIPYK/


[ovirt-users] Re: Speed Issues

2020-03-27 Thread Jayme
Christian,

I've been following along with interest, as I've also been trying
everything I can to improve gluster performance in my HCI cluster. My issue
is mostly latency-related, and my workloads are typically small-file
operations, which have been especially challenging.

A couple of things:

1. About the MTU, did you also enable jumbo frames at switch level (if
applicable)? I have jumbo frames enabled but honestly didn't see much of an
impact from doing so.

2. About libgfapi. It's actually quite simple to enable (at least if you
want to do some testing). It can be enabled on the hosted engine using
engine-config, i.e. engine-config -s LibgfApiSupported=true -- from my
experience you can do this while VMs are running, and they won't pick up the
new config until powered off/restarted. So you are able to test it out on
one VM. Again, as some others have mentioned, this is not a default
option in oVirt because there are known bugs in the libgfapi
implementation. Some others have worked around these bugs in various ways,
but like you, I am not willing to do so in a production environment. Still,
I think it's very much worth doing some tests on a VM with libgfapi enabled
compared to the default FUSE mount.
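
For reference, the whole enable/verify sequence on the engine is short (an
engine restart is assumed to be needed for the setting to take effect):

  engine-config -s LibgfApiSupported=true
  systemctl restart ovirt-engine
  engine-config -g LibgfApiSupported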



On Fri, Mar 27, 2020 at 7:44 AM Christian Reiss 
wrote:

> Hey,
>
> thanks for writing. If I go with "don't choose local" my speed drops
> dramatically (halving). Speed between the hosts is okay (tested), but for
> some odd reason the MTU is still at 1500. I was sure I set it to
> jumbo/9k. Oh well.
>
> Not during runtime. I can hear the gluster scream if the network dies
> for a second :)
>
> -Chris.
>
> On 24/03/2020 18:33, Darrell Budic wrote:
>  >
>  > cluster.choose-local: false
>  > cluster.read-hash-mode: 3
>  >
>  > if you have separate servers or nodes with are not HCI to allow it
>  > spread reads over multiple nodes.
> --
>   Christian Reiss - em...@christian-reiss.de /"\  ASCII Ribbon
> supp...@alpha-labs.net   \ /Campaign
>   X   against HTML
>   WEB alpha-labs.net / \   in eMails
>
>   GPG Retrieval https://gpg.christian-reiss.de
>   GPG ID ABCD43C5, 0x44E29126ABCD43C5
>   GPG fingerprint = 9549 F537 2596 86BA 733C  A4ED 44E2 9126 ABCD 43C5
>
>   "It's better to reign in hell than to serve in heaven.",
>John Milton, Paradise lost.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QYS7RIHXYAYW7XTPFVZBUHNGPFQMYA7H/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5JBBWOM3KGQ3FPY2OCW7ZBD4EGFEGDTR/


[ovirt-users] Re: Speed Issues

2020-03-24 Thread Jayme
I strongly believe that the FUSE mount is the real reason for poor performance
in HCI, and these minor gluster and other tweaks won't satisfy most users
seeking I/O performance. Enabling libgfapi is probably the best option. Red
Hat has recently closed bug reports related to libgfapi citing won't-fix, and
one comment suggests that libgfapi was not showing a big enough performance
improvement to bother with, which appears to contradict what many oVirt users
are seeing. It's confusing to me why libgfapi as a default option is not being
given any priority.

https://bugzilla.redhat.com/show_bug.cgi?id=1465810

"We do not plan to enable libgfapi for oVirt/RHV. We did not find enough
performance improvement justification for it"

On Tue, Mar 24, 2020 at 3:34 PM Alex McWhirter  wrote:

> Red Hat also recommends a shard size of 512MB; it's actually the only
> shard size they support. Also check the chunk size on the LVM thin pools
> backing the bricks; it should be at least 2MB. Note that changing the shard
> size only applies to new VM disks after the change. Changing the chunk
> size requires making a new brick.
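>
> A sketch of checking both, where VOLNAME and GLUSTER_VG are placeholders
> for your volume name and brick volume group:
>
>   gluster volume get VOLNAME features.shard-block-size
>   lvs -o lv_name,chunk_size GLUSTER_VG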
>
> libgfapi brings a huge performance boost, in my opinion its almost a
> necessity unless you have a ton of extra disk speed / network
> throughput. Just be aware of the caveats.
>
> On 2020-03-24 14:12, Strahil Nikolov wrote:
> > On March 24, 2020 7:33:16 PM GMT+02:00, Darrell Budic
> >  wrote:
> >> Christian,
> >>
> >> Adding on to Stahil’s notes, make sure you’re using jumbo MTUs on
> >> servers and client host nodes. Making sure you’re using appropriate
> >> disk schedulers on hosts and VMs is important, worth double checking
> >> that it’s doing what you think it is. If you are only HCI, gluster’s
> >> choose-local on is a good thing, but try
> >>
> >> cluster.choose-local: false
> >> cluster.read-hash-mode: 3
> >>
> >> if you have separate servers or nodes with are not HCI to allow it
> >> spread reads over multiple nodes.
> >>
> >> Test out these settings if you have lots of RAM and cores on your
> >> servers, they work well for me with 20 cores and 64GB ram on my
> >> servers
> >> with my load:
> >>
> >> performance.io-thread-count: 64
> >> performance.low-prio-threads: 32
> >>
> >> these are worth testing for your workload.
> >>
> >> If you’re running VMs with these, test out libglapi connections, it’s
> >> significantly better for IO latency than plain fuse mounts. If you can
> >> tolerate the issues, the biggest one at the moment being you can’t
> >> take
> >> snapshots of the VMs with it enabled as of March.
> >>
> >> If you have tuned available, I use throughput-performance on my
> >> servers
> >> and guest-host on my vm nodes, throughput-performance on some HCI
> >> ones.
> >>
> >>
> >> I’d test without the fips-rchecksum setting; that may be creating
> >> extra work for your servers.
> >>
> >> If you mounted individual bricks, check that you disabled barriers on
> >> them at mount if appropriate.
> >>
> >> Hope it helps,
> >>
> >>  -Darrell
> >>
> >>> On Mar 24, 2020, at 6:23 AM, Strahil Nikolov 
> >> wrote:
> >>>
> >>> On March 24, 2020 11:20:10 AM GMT+02:00, Christian Reiss
> >>  wrote:
>  Hey Strahil,
> 
>  seems you're the go-to-guy with pretty much all my issues. I thank
> >> you
>  for this and your continued support. Much appreciated.
> 
> 
>  200mb/reads however seems like a broken config or malfunctioning
>  gluster
>  than requiring performance tweaks. I enabled profiling so I have
> >> real
>  life data available. But seriously even without tweaks I would like
>  (need) 4 times those numbers, 800mb write speed is okay'ish, given
> >> the
>  fact that 10gbit backbone can be the limiting factor.
> 
>  We are running BigCouch/CouchDB Applications that really really need
>  IO.
>  Not in throughput but in response times. 200mb/s is just way off.
> 
>  It feels as gluster can/should do more, natively.
> 
>  -Chris.
> 
>  On 24/03/2020 06:17, Strahil Nikolov wrote:
> > Hey Chris,,
> >
> > You got some options.
> > 1. To speedup the reads in HCI - you can use the option :
> > cluster.choose-local: on
> > 2. You can adjust the server and client event-threads
> > 3. You can use NFS Ganesha (which connects to all servers via
>  libgfapi)  as a NFS Server.
> > In such case you have to use some clustering like ctdb or
> >> pacemaker.
> > Note:disable cluster.choose-local if you use this one
> > 4 You can try the built-in NFS , although it's deprecated (NFS
>  Ganesha is fully supported)
> > 5.  Create a gluster profile during the tests. I have seen numerous
>  improperly selected tests -> so test with real-world  workload.
>  Synthetic tests are not good.
> >
> > Best Regards,
> > Strahil Nikolov
> >>>
> >>> Hey Chris,
> >>>
> >>> What type is your VM ?
> >>> Try with 'High Performance' one (there is a  good RH documentation on
> >> that topic).
> >>>
> >>> 

[ovirt-users] Re: Artwork: 4.4 GA banners

2020-03-24 Thread Jayme
Hey Sandro,

Do you have more specific details or guidelines regarding the graphics
you are looking for?

Thanks!

On Tue, Mar 24, 2020 at 1:27 PM Sandro Bonazzola 
wrote:

> Hi,
> in preparation of oVirt 4.4 GA it would be nice to have some graphics we
> can use for launching oVirt 4.4 GA on social media and oVirt website.
> If you don't have coding skills but you have marketing or design skills
> this is a good opportunity to contribute back to the project.
> Looking forward to your designs!
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.*
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WQWKXCPQVII5SZX2AX2SGUYORDVG5KS6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XXQDEW5ONO6L7RFJY34EAZOHVRFP7WDD/


[ovirt-users] Re: Speed Issues

2020-03-23 Thread Jayme
I too struggle with speed issues in HCI. Latency is a big problem with
writes for me, especially when dealing with small-file workloads. How are
you testing, exactly?

Look into enabling libgfapi and try some comparisons with that. People have
been saying it’s much faster, but it’s not a default option and has a few
bugs. Red Hat devs do not appear to be giving its implementation any
priority, unfortunately.

I’ve been considering switching to NFS storage because I’m seeing much
better performance in testing with it. I have some NVMe drives on the way
and am curious how they would perform in HCI, but I’m thinking the issue is
not a disk bottleneck (that appears very obvious in your case as well).



On Mon, Mar 23, 2020 at 6:44 PM Christian Reiss 
wrote:

> Hey folks,
>
> Gluster-related question. With SSDs in a RAID that can do 2 GB/s writes
> and reads (actually above, but meh), in a 3-way HCI cluster connected
> over 10gbit, things are pretty slow inside gluster.
>
> I have these settings:
>
> Options Reconfigured:
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.shd-max-threads: 8
> features.shard: on
> features.shard-block-size: 64MB
> server.event-threads: 8
> user.cifs: off
> cluster.shd-wait-qlength: 1
> cluster.locking-scheme: granular
> cluster.eager-lock: enable
> performance.low-prio-threads: 32
> network.ping-timeout: 30
> cluster.granular-entry-heal: enable
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.choose-local: true
> client.event-threads: 16
> performance.strict-o-direct: on
> network.remote-dio: enable
> performance.client-io-threads: on
> nfs.disable: on
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> cluster.readdir-optimize: on
> cluster.metadata-self-heal: on
> cluster.data-self-heal: on
> cluster.entry-self-heal: on
> cluster.data-self-heal-algorithm: full
> features.uss: enable
> features.show-snapshot-directory: on
> features.barrier: disable
> auto-delete: enable
> snap-activate-on-create: enable
>
> Writing inside /gluster_bricks yields those 2GB/sec writes, and reading
> the same.
>
> Inside the /rhev/data-center/mnt/glusterSD/ dir, reads go down to
> 366MB/sec while writes plummet to 200MB/sec.
>
> Summed up: writing into the SSD RAID in the lvm/xfs gluster brick
> directory is fast; writing into the mounted gluster dir is horribly slow.
>
> The above can be seen and repeated on all 3 servers. The network can do
> full 10gbit (tested with, among others: rsync, iperf3).
>
> Anyone with some idea on whats missing/ going on here?
>
> Thanks folks,
> as always stay safe and healthy!
>
> --
> with kind regards,
> mit freundlichen Gruessen,
>
> Christian Reiss
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OMAAERV4IUISYEWD4QP5OAM4DK4JTTLF/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5FGWRA4X53LPH42FHWEEQ7HLTZJQUGOL/


[ovirt-users] Re: Gluster Settings

2020-03-19 Thread Jayme
It applies a profile for the virt group. You can get more info here:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/app-virt_profile

Or you can look at the file directly; it's basically just a list of gluster
volume options to be applied. I can't remember off the top of my head what
location the profiles are in, but it shouldn't be too difficult to find.
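
For reference, on the gluster hosts the group files live under
/var/lib/glusterd/groups/, and the same profile can be applied from the CLI
(VOLNAME being your volume):

  cat /var/lib/glusterd/groups/virt        # inspect the option list
  gluster volume set VOLNAME group virt    # apply it to a volume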

On Thu, Mar 19, 2020 at 7:45 AM Christian Reiss 
wrote:

> Yeah,
>
> That button scares me. What does it do, precisely?
>
> On 19/03/2020 11:18, Jayme wrote:
> > At the very least you should make sure to apply the gluster virt profile
> > to vm volumes. This can also be done using optimize for virt store in
> > the ovirt GUI
>
> --
> with kind regards,
> mit freundlichen Gruessen,
>
> Christian Reiss
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7L2M4N6LAQST7ZKFVQ4FWDSF3BHKC7YQ/


[ovirt-users] Re: Gluster Settings

2020-03-19 Thread Jayme
At the very least you should make sure to apply the gluster virt profile to
VM volumes. This can also be done using "Optimize for Virt Store" in the
oVirt GUI.

On Thu, Mar 19, 2020 at 6:54 AM Christian Reiss 
wrote:

> Hey folks,
>
> quick question. For running Gluster / oVirt I found several places, some
> outdated (ovirt docs), gluster Mailinglists, oVirt Mailinglists etc that
> recommend different things.
>
> Here is what I found out/configured:
>
> features.barrier: disable
> features.show-snapshot-directory: on
> features.uss: enable
> cluster.data-self-heal-algorithm: full
> cluster.entry-self-heal: on
> cluster.data-self-heal: on
> cluster.metadata-self-heal: on
> cluster.readdir-optimize: on
> transport.address-family: inet
> storage.fips-mode-rchecksum: on
> nfs.disable: on
> performance.client-io-threads: off
> network.remote-dio: off
> performance.strict-o-direct: on
> client.event-threads: 16
> cluster.choose-local: true
> snap-activate-on-create: enable
> auto-delete: enable
>
> Would you agree or change anything (usual vm workload).
>
> Thanks! o/
> And keep healthy.
>
> --
> with kind regards,
> mit freundlichen Gruessen,
>
> Christian Reiss
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ABTBEHQG7A3F45F7TS2EB3KAGVHGUC5N/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YM445LCMHEH6XPUJG4EOPEGTUQJI75LS/


[ovirt-users] adding new gluster volume

2020-03-17 Thread Jayme
What, if any, steps do I need to take prior to adding an additional gluster
volume to my HCI cluster using new storage devices via the oVirt GUI? Will
the GUI prepare the devices (XFS/LVM etc.) or do I need to do that beforehand?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EXFXCKCFXISAZVUJRPB5UBIK6HTCMH3J/


[ovirt-users] Re: Can anyone explain what gluster 4k support is?

2020-03-17 Thread Jayme
Are there performance improvements from using a larger block size?

On Tue, Mar 17, 2020 at 5:29 AM Vojtech Juranek  wrote:

> On pondělí 16. března 2020 22:53:32 CET Strahil Nikolov wrote:
> > On March 16, 2020 11:08:16 PM GMT+02:00, Vojtech Juranek
>  wrote:
> > >On středa 11. března 2020 21:13:13 CET Jayme wrote:
> > >> I noticed Gluster 4k support mentioned in recent oVirt release notes.
> > >
> > >Can
> > >
> > >> anyone explain what this is about?
> > >
> > >before we supported only disks with block size 512 B. Now, we support
> > >also
> > >disks with 4 kB (aka 4k), for now only on Gluster SD. If you want to
> > >learn more about
> > >this feature, you can check slides [1] and I noticed that videos
> > >recording of
> > >Nir's talk he had on Fosdem is already available, so you can watch
> > >whole
> > >talk [2].
> > >
> > >Vojta
> > >
> > >[1]
> > >
> https://docs.google.com/presentation/d/1ClLMZ4XAb8CPhYOw6mNpv0va5JWcAgpyFbJ
> > >n-Bml_aY/ [2] https://video.fosdem.org/2020/H.1309/vai_ovirt_4k.webm
> >
> > Can I switch off VDO's 512byte emulation ?
>
> yes, now Gluster SD should work without 512 block size
> emulation___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LEG5DXY3A5HSABLDHQAO2M53G3SC4E2U/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7FVGBJQZSIETJRWEUTR4NC4YQGFZX7SH/


[ovirt-users] Re: Ansible playbook timeout

2020-03-15 Thread Jayme
This is all that should be needed; I've done so on my engine and it works
fine to set the timeout much higher. My guess is that you did not restart
the engine after changing the config.
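
For completeness, the whole sequence (the restart being the easy step to
miss) would be:

  echo 'ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=150' \
    > /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
  systemctl restart ovirt-engine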

On Sun, Mar 15, 2020 at 10:44 AM Barrett Richardson 
wrote:

> Version 4.2.8.2-1.0.9.el7
>
> Per the info near the bottom of
> /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.conf I should be
> able to create this file,
>
> /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
>
> and place in the file these contents,
>
> ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=150
>
> and extend the playbook timeout.  It doesn't work, still times out after
> 30 minutes.  Any suggested workarounds?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UMKQAVQQIN6XU6LL4ZZDEBZH5DWZDMH6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/24EZTZQ4WFIVEWPZV3NYTWNNIHJ7D67J/


[ovirt-users] Can anyone explain what gluster 4k support is?

2020-03-11 Thread Jayme
I noticed Gluster 4k support mentioned in recent oVirt release notes. Can
anyone explain what this is about?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BWVZDUUBJHJHIVI4UJT5GSWFU4HP4V5B/


[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-08 Thread Jayme
OK, this is even stranger. The same dd test against my SSD OS/boot drives
on the oVirt node hosts, using the same model drive (only smaller) and the
same H310 controller (the only difference being that the OS/boot drives are
in a RAID mirror while the gluster drives are passthrough), completes in <2
seconds in /tmp on the host but takes ~45 seconds in
/gluster_bricks/brick_whatever.

Is there any explanation why there is such a vast difference between the
two tests?

Example of one of my mounts:

/dev/mapper/onn_orchard1-tmp /tmp ext4 defaults,discard 1 2
/dev/gluster_vg_sda/gluster_lv_prod_a /gluster_bricks/brick_a xfs
inode64,noatime,nodiratime 0 0

On Sun, Mar 8, 2020 at 12:23 PM Jayme  wrote:

> Strahil,
>
> I'm starting to think that my problem could be related to the use of perc
> H310 mini raid controllers in my oVirt hosts. The os/boot SSDs are raid
> mirror but gluster storage is SSDs in passthrough. I've read that the queue
> depth of h310 card is very low and can cause performance issues
> especially when used with flash devices.
>
> dd if=/dev/zero of=test4.img bs=512 count=5000 oflag=dsync on one of my
> hosts gluster bricks /gluster_bricks/brick_a for example takes 45 seconds
> to complete.
>
> I can perform the same operation in ~2 seconds on another server with a
> better raid controller, but with the same model ssd.
>
> I might look at seeing how I can swap out the h310's, unfortunately I
> think that may require me to wipe the gluster storage drives as with
> another controller I believe they'd need to be added as single raid 0
> arrays and would need to be rebuilt to do so.
>
> If I were to take one host down at a time is there a way that I can
> re-build the entire server including wiping the gluster disks and add the
> host back into the ovirt cluster and rebuild it along with the bricks? How
> would you recommend doing such a task if I needed to wipe gluster disks on
> each host ?
>
>
>
> On Sat, Mar 7, 2020 at 6:24 PM Jayme  wrote:
>
>> No worries at all about the length of the email, the details are highly
>> appreciated. You've given me lots to look into and consider.
>>
>>
>>
>> On Sat, Mar 7, 2020 at 10:02 AM Strahil Nikolov 
>> wrote:
>>
>>> On March 7, 2020 1:12:58 PM GMT+02:00, Jayme  wrote:
>>> >Thanks again for the info. You’re probably right about the testing
>>> >method.
>>> >Though the reason I’m down this path in the first place is because I’m
>>> >seeing a problem in real world work loads. Many of my vms are used in
>>> >development environments where working with small files is common such
>>> >as
>>> >npm installs working with large node_module folders, ci/cd doing lots
>>> >of
>>> >mixed operations io and compute.
>>> >
>>> >I started testing some of these things by comparing side to side with a
>>> >vm
>>> >using same specs only difference being gluster vs nfs storage. Nfs
>>> >backed
>>> >storage is performing about 3x better real world.
>>> >
>>> >Gluster version is stock that comes with 4.3.7. I haven’t attempted
>>> >updating it outside of official ovirt updates.
>>> >
>>> >I’d like to see if I could improve it to handle my workloads better. I
>>> >also
>>> >understand that replication adds overhead.
>>> >
>>> >I do wonder how much difference in performance there would be with
>>> >replica
>>> >3 vs replica 3 arbiter. I’d assume arbiter setup would be faster but
>>> >perhaps not by a considerable difference.
>>> >
>>> >I will check into c states as well
>>> >
>>> >On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov 
>>> >wrote:
>>> >
>>> >> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme 
>>> >wrote:
>>> >> >Strahil,
>>> >> >
>>> >> >Thanks for your suggestions. The config is pretty standard HCI setup
>>> >> >with
>>> >> >cockpit and hosts are oVirt node. XFS was handled by the deployment
>>> >> >automatically. The gluster volumes were optimized for virt store.
>>> >> >
>>> >> >I tried noop on the SSDs, that made zero difference in the tests I
>>> >was
>>> >> >running above. I took a look at the random-io-profile and it looks
>>> >like
>>> >> >it
>>> >> >really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5
>>> >--
>>> >> >my
>>> >> >hosts alr

[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-08 Thread Jayme
Strahil,

I'm starting to think that my problem could be related to the use of PERC
H310 Mini RAID controllers in my oVirt hosts. The OS/boot SSDs are in a RAID
mirror but the gluster storage is SSDs in passthrough. I've read that the
queue depth of the H310 card is very low and can cause performance issues,
especially when used with flash devices.

dd if=/dev/zero of=test4.img bs=512 count=5000 oflag=dsync on one of my
hosts' gluster bricks (/gluster_bricks/brick_a, for example) takes 45 seconds
to complete.

I can perform the same operation in ~2 seconds on another server with a
better raid controller, but with the same model ssd.

I might look at swapping out the H310s; unfortunately, I think that may
require me to wipe the gluster storage drives, as with another controller I
believe they'd need to be added as single RAID 0 arrays and would need to be
rebuilt to do so.

If I were to take one host down at a time, is there a way that I can
rebuild the entire server, including wiping the gluster disks, and add the
host back into the oVirt cluster and rebuild it along with the bricks? How
would you recommend doing such a task if I needed to wipe the gluster disks
on each host?



On Sat, Mar 7, 2020 at 6:24 PM Jayme  wrote:

> No worries at all about the length of the email, the details are highly
> appreciated. You've given me lots to look into and consider.
>
>
>
> On Sat, Mar 7, 2020 at 10:02 AM Strahil Nikolov 
> wrote:
>
>> On March 7, 2020 1:12:58 PM GMT+02:00, Jayme  wrote:
>> >Thanks again for the info. You’re probably right about the testing
>> >method.
>> >Though the reason I’m down this path in the first place is because I’m
>> >seeing a problem in real world work loads. Many of my vms are used in
>> >development environments where working with small files is common such
>> >as
>> >npm installs working with large node_module folders, ci/cd doing lots
>> >of
>> >mixed operations io and compute.
>> >
>> >I started testing some of these things by comparing side to side with a
>> >vm
>> >using same specs only difference being gluster vs nfs storage. Nfs
>> >backed
>> >storage is performing about 3x better real world.
>> >
>> >Gluster version is stock that comes with 4.3.7. I haven’t attempted
>> >updating it outside of official ovirt updates.
>> >
>> >I’d like to see if I could improve it to handle my workloads better. I
>> >also
>> >understand that replication adds overhead.
>> >
>> >I do wonder how much difference in performance there would be with
>> >replica
>> >3 vs replica 3 arbiter. I’d assume arbiter setup would be faster but
>> >perhaps not by a considerable difference.
>> >
>> >I will check into c states as well
>> >
>> >On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov 
>> >wrote:
>> >
>> >> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme 
>> >wrote:
>> >> >Strahil,
>> >> >
>> >> >Thanks for your suggestions. The config is pretty standard HCI setup
>> >> >with
>> >> >cockpit and hosts are oVirt node. XFS was handled by the deployment
>> >> >automatically. The gluster volumes were optimized for virt store.
>> >> >
>> >> >I tried noop on the SSDs, that made zero difference in the tests I
>> >was
>> >> >running above. I took a look at the random-io-profile and it looks
>> >like
>> >> >it
>> >> >really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5
>> >--
>> >> >my
>> >> >hosts already appear to have those sysctl values, and by default are
>> >> >using virtual-host tuned profile.
>> >> >
>> >> >I'm curious what a test like "dd if=/dev/zero of=test2.img bs=512
>> >> >count=1000 oflag=dsync" on one of your VMs would show for results?
>> >> >
>> >> >I haven't done much with gluster profiling but will take a look and
>> >see
>> >> >if
>> >> >I can make sense of it. Otherwise, the setup is pretty stock oVirt
>> >HCI
>> >> >deployment with SSD backed storage and 10Gbe storage network.  I'm
>> >not
>> >> >coming anywhere close to maxing network throughput.
>> >> >
>> >> >The NFS export I was testing was an export from a local server
>> >> >exporting a
>> >> >single SSD (same type as in the oVirt hosts).
>> >> >
>> >> >I might end up switching storage to NFS and d

[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-07 Thread Jayme
No worries at all about the length of the email, the details are highly
appreciated. You've given me lots to look into and consider.



On Sat, Mar 7, 2020 at 10:02 AM Strahil Nikolov 
wrote:

> On March 7, 2020 1:12:58 PM GMT+02:00, Jayme  wrote:
> >Thanks again for the info. You’re probably right about the testing
> >method.
> >Though the reason I’m down this path in the first place is because I’m
> >seeing a problem in real world work loads. Many of my vms are used in
> >development environments where working with small files is common such
> >as
> >npm installs working with large node_module folders, ci/cd doing lots
> >of
> >mixed operations io and compute.
> >
> >I started testing some of these things by comparing side to side with a
> >vm
> >using same specs only difference being gluster vs nfs storage. Nfs
> >backed
> >storage is performing about 3x better real world.
> >
> >Gluster version is stock that comes with 4.3.7. I haven’t attempted
> >updating it outside of official ovirt updates.
> >
> >I’d like to see if I could improve it to handle my workloads better. I
> >also
> >understand that replication adds overhead.
> >
> >I do wonder how much difference in performance there would be with
> >replica
> >3 vs replica 3 arbiter. I’d assume arbiter setup would be faster but
> >perhaps not by a considerable difference.
> >
> >I will check into c states as well
> >
> >On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov 
> >wrote:
> >
> >> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme 
> >wrote:
> >> >Strahil,
> >> >
> >> >Thanks for your suggestions. The config is pretty standard HCI setup
> >> >with
> >> >cockpit and hosts are oVirt node. XFS was handled by the deployment
> >> >automatically. The gluster volumes were optimized for virt store.
> >> >
> >> >I tried noop on the SSDs, that made zero difference in the tests I
> >was
> >> >running above. I took a look at the random-io-profile and it looks
> >like
> >> >it
> >> >really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5
> >--
> >> >my
> >> >hosts already appear to have those sysctl values, and by default are
> >> >using virtual-host tuned profile.
> >> >
> >> >I'm curious what a test like "dd if=/dev/zero of=test2.img bs=512
> >> >count=1000 oflag=dsync" on one of your VMs would show for results?
> >> >
> >> >I haven't done much with gluster profiling but will take a look and
> >see
> >> >if
> >> >I can make sense of it. Otherwise, the setup is pretty stock oVirt
> >HCI
> >> >deployment with SSD backed storage and 10Gbe storage network.  I'm
> >not
> >> >coming anywhere close to maxing network throughput.
> >> >
> >> >The NFS export I was testing was an export from a local server
> >> >exporting a
> >> >single SSD (same type as in the oVirt hosts).
> >> >
> >> >I might end up switching storage to NFS and ditching gluster if
> >> >performance
> >> >is really this much better...
> >> >
> >> >
> >> >On Fri, Mar 6, 2020 at 5:06 PM Strahil Nikolov
> >
> >> >wrote:
> >> >
> >> >> On March 6, 2020 6:02:03 PM GMT+02:00, Jayme 
> >> >wrote:
> >> >> >I have 3 server HCI with Gluster replica 3 storage (10GBe and SSD
> >> >> >disks).
> >> >> >Small file performance inside the VM is pretty terrible compared to a
> >> >> >similar
> >> >> >spec'ed VM using NFS mount (10GBe network, SSD disk)
> >> >> >
> >> >> >VM with gluster storage:
> >> >> >
> >> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >> >> >1000+0 records in
> >> >> >1000+0 records out
> >> >> >512000 bytes (512 kB) copied, 53.9616 s, 9.5 kB/s
> >> >> >
> >> >> >VM with NFS:
> >> >> >
> >> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >> >> >1000+0 records in
> >> >> >1000+0 records out
> >> >> >512000 bytes (512 kB) copied, 2.20059 s, 233 kB/s
> >> >> >
> >> >> >This is a very big difference, 2 seconds to copy 1000 files on NFS
> >> >> >VM vs 53 seconds on the other.
>

[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-07 Thread Jayme
Thanks again for the info. You’re probably right about the testing method.
Though the reason I’m down this path in the first place is because I’m
seeing a problem in real-world workloads. Many of my vms are used in
development environments where working with small files is common, such as
npm installs working with large node_modules folders, and ci/cd doing lots of
mixed io and compute operations.

I started testing some of these things by comparing side by side with a vm
using the same specs, the only difference being gluster vs nfs storage. Nfs-backed
storage is performing about 3x better in real-world use.

Gluster version is stock that comes with 4.3.7. I haven’t attempted
updating it outside of official ovirt updates.

I’d like to see if I could improve it to handle my workloads better. I also
understand that replication adds overhead.

I do wonder how much difference in performance there would be with replica
3 vs replica 3 arbiter. I’d assume arbiter setup would be faster but
perhaps not by a considerable difference.

I will check into c states as well
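
For anyone following along, checking C-states on an EL7 host can look
roughly like this (a sketch, assuming Intel CPUs and the kernel-tools
package installed):

# List the C-states the CPUs can enter and how much time is spent in them
cpupower idle-info

# To cap deep C-states, add these to the kernel command line and reboot
# (whether this helps a given workload is an assumption worth testing):
#   intel_idle.max_cstate=1 processor.max_cstate=1
grep -o 'max_cstate=[^ ]*' /proc/cmdline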

On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov 
wrote:

> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme  wrote:
> >Strahil,
> >
> >Thanks for your suggestions. The config is pretty standard HCI setup
> >with
> >cockpit and hosts are oVirt node. XFS was handled by the deployment
> >automatically. The gluster volumes were optimized for virt store.
> >
> >I tried noop on the SSDs, that made zero difference in the tests I was
> >running above. I took a look at the random-io-profile and it looks like
> >it
> >really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5 --
> >my
> >hosts already appear to have those sysctl values, and by default are
> >using virtual-host tuned profile.
> >
> >I'm curious what a test like "dd if=/dev/zero of=test2.img bs=512
> >count=1000 oflag=dsync" on one of your VMs would show for results?
> >
> >I haven't done much with gluster profiling but will take a look and see
> >if
> >I can make sense of it. Otherwise, the setup is pretty stock oVirt HCI
> >deployment with SSD backed storage and 10Gbe storage network.  I'm not
> >coming anywhere close to maxing network throughput.
> >
> >The NFS export I was testing was an export from a local server
> >exporting a
> >single SSD (same type as in the oVirt hosts).
> >
> >I might end up switching storage to NFS and ditching gluster if
> >performance
> >is really this much better...
> >
> >
> >On Fri, Mar 6, 2020 at 5:06 PM Strahil Nikolov 
> >wrote:
> >
> >> On March 6, 2020 6:02:03 PM GMT+02:00, Jayme 
> >wrote:
> >> >I have 3 server HCI with Gluster replica 3 storage (10GBe and SSD
> >> >disks).
> >> >Small file performance inside the VM is pretty terrible compared to a
> >> >similar
> >> >spec'ed VM using NFS mount (10GBe network, SSD disk)
> >> >
> >> >VM with gluster storage:
> >> >
> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >> >1000+0 records in
> >> >1000+0 records out
> >> >512000 bytes (512 kB) copied, 53.9616 s, 9.5 kB/s
> >> >
> >> >VM with NFS:
> >> >
> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >> >1000+0 records in
> >> >1000+0 records out
> >> >512000 bytes (512 kB) copied, 2.20059 s, 233 kB/s
> >> >
> >> >This is a very big difference, 2 seconds to copy 1000 files on NFS
> >VM
> >> >VS 53
> >> >seconds on the other.
> >> >
> >> >Aside from enabling libgfapi is there anything I can tune on the
> >> >gluster or
> >> >VM side to improve small file performance? I have seen some guides
> >by
> >> >Redhat in regards to small file performance but I'm not sure what/if
> >> >any of
> >> >it applies to oVirt's implementation of gluster in HCI.
> >>
> >> You can use the rhgs-random-io tuned  profile from
> >>
> >
> ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.4.2.0-1.el7rhgs.src.rpm
> >> and try with that on your hosts.
> >> In my case, I have  modified  it so it's a mixture between
> >rhgs-random-io
> >> and the profile for Virtualization Host.
> >>
> >> Also,ensure that your bricks are  using XFS with relatime/noatime
> >mount
> >> option and your scheduler for the SSDs is either  'noop' or 'none'
> >.The
> >> default  I/O scheduler for RHEL7 is deadline which is giving
> >> >preference to reads and your workload is definitely 'write'.

[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-06 Thread Jayme
Strahil,

Thanks for your suggestions. The config is pretty standard HCI setup with
cockpit and hosts are oVirt node. XFS was handled by the deployment
automatically. The gluster volumes were optimized for virt store.

I tried noop on the SSDs; that made zero difference in the tests I was
running above. I took a look at the random-io-profile and it looks like it
really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5 -- my
hosts already appear to have those sysctl values, and by default are
using virtual-host tuned profile.
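
For anyone wanting to double-check the same things, this is roughly what I
looked at (a sketch; sdb is a placeholder for your SSD device):

# Active I/O scheduler is shown in brackets
cat /sys/block/sdb/queue/scheduler

# Switch to noop at runtime (not persistent across reboots)
echo noop > /sys/block/sdb/queue/scheduler

# The sysctl values the random-io profile would set
sysctl vm.dirty_background_ratio vm.dirty_ratio

# Currently active tuned profile
tuned-adm active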

I'm curious what a test like "dd if=/dev/zero of=test2.img bs=512
count=1000 oflag=dsync" on one of your VMs would show for results?

I haven't done much with gluster profiling but will take a look and see if
I can make sense of it. Otherwise, the setup is pretty stock oVirt HCI
deployment with SSD backed storage and 10Gbe storage network.  I'm not
coming anywhere close to maxing network throughput.
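
For anyone else following along, the profiling commands themselves look
simple enough (a sketch, assuming a volume named data):

# Begin collecting per-brick statistics
gluster volume profile data start
# ...run the small-file workload inside a VM...
gluster volume profile data info   # latency broken down by FOP type
gluster volume profile data stop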

The NFS export I was testing was an export from a local server exporting a
single SSD (same type as in the oVirt hosts).

I might end up switching storage to NFS and ditching gluster if performance
is really this much better...


On Fri, Mar 6, 2020 at 5:06 PM Strahil Nikolov 
wrote:

> On March 6, 2020 6:02:03 PM GMT+02:00, Jayme  wrote:
> >I have 3 server HCI with Gluster replica 3 storage (10GBe and SSD
> >disks).
> >Small file performance inside the VM is pretty terrible compared to a
> >similar
> >spec'ed VM using NFS mount (10GBe network, SSD disk)
> >
> >VM with gluster storage:
> >
> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >1000+0 records in
> >1000+0 records out
> >512000 bytes (512 kB) copied, 53.9616 s, 9.5 kB/s
> >
> >VM with NFS:
> >
> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >1000+0 records in
> >1000+0 records out
> >512000 bytes (512 kB) copied, 2.20059 s, 233 kB/s
> >
> >This is a very big difference, 2 seconds to copy 1000 files on NFS VM
> >VS 53
> >seconds on the other.
> >
> >Aside from enabling libgfapi is there anything I can tune on the
> >gluster or
> >VM side to improve small file performance? I have seen some guides by
> >Redhat in regards to small file performance but I'm not sure what/if
> >any of
> >it applies to oVirt's implementation of gluster in HCI.
>
> You can use the rhgs-random-io tuned  profile from
> ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.4.2.0-1.el7rhgs.src.rpm
> and try with that on your hosts.
> In my case, I have  modified  it so it's a mixture between rhgs-random-io
> and the profile for Virtualization Host.
>
> Also, ensure that your bricks are using XFS with the relatime/noatime mount
> option and that your scheduler for the SSDs is either 'noop' or 'none'. The
> default I/O scheduler for RHEL7 is deadline, which gives preference to
> reads, and your workload is definitely 'write'.
>
> Ensure that the virt settings are  enabled for your gluster volumes:
> 'gluster volume set <volname> group virt'
>
> Also, are you running on fully allocated disks for the VM, or did you start
> thin?
> I'm asking as creation of new shards at the gluster level is a slow task.
>
> Have you tried gluster profiling on the volume? It can clarify what is
> going on.
>
>
> Also, are you comparing apples to apples?
> For example, 1 SSD mounted and exported as NFS vs a replica 3 volume on
> the same type of SSD? If not, the NFS server can have more iops due to
> multiple disks behind it, while Gluster has to write the same thing on all
> nodes.
>
> Best Regards,
> Strahil Nikolov
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2PR4JHLJFJQY3MMLKDBKSKALT2JX7KT5/


[ovirt-users] What if anything can be done to improve small file performance with gluster?

2020-03-06 Thread Jayme
I have a 3-server HCI setup with Gluster replica 3 storage (10GbE and SSD disks).
Small file performance inside the VM is pretty terrible compared to a similarly
spec'ed VM using an NFS mount (10GbE network, SSD disk).

VM with gluster storage:

# dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 53.9616 s, 9.5 kB/s

VM with NFS:

# dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 2.20059 s, 233 kB/s

This is a very big difference: 2 seconds for 1,000 synchronous 512-byte writes
on the NFS VM vs. 53 seconds on the gluster-backed one.

Aside from enabling libgfapi is there anything I can tune on the gluster or
VM side to improve small file performance? I have seen some guides by
Redhat in regards to small file performance but I'm not sure what/if any of
it applies to oVirt's implementation of gluster in HCI.
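
(Side note: dd with oflag=dsync is close to a worst case; a less sync-bound
comparison could be run with fio. A sketch with arbitrary job values,
/mnt/test being a placeholder path on the storage under test:

fio --name=smallfile --directory=/mnt/test --rw=randwrite \
    --bs=4k --size=256m --numjobs=4 --fsync=1 --group_reporting
)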
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TIVAULOZPYFTQ3AYRX6KSRW6KUQKWXJ5/


[ovirt-users] Re: Hyperconverged setup questions

2020-02-29 Thread Jayme
I think you may be misinterpreting HCI. Even though the hosts are used for
storage, it's still not technically local storage: the hosts act as gluster
clients and mount the gluster volumes, so storage traffic still goes over
the network.

You can get better gluster performance if you switch to libgfapi, however
it's not the default option and has some known issues. I also read in a
report recently that Red Hat doesn't plan to implement it because the
performance improvements aren't great, although many oVirt users on this
list seem to have seen a performance boost after switching to libgfapi.
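
If you do want to experiment with libgfapi, enabling it is reportedly a
one-liner on the engine; an untested sketch, so verify the key and the
--cver value against your release first:

engine-config -s LibgfApiSupported=true --cver=4.3
systemctl restart ovirt-engine
# Note: running VMs only pick up gfapi-backed disks after a full
# shutdown/start, not a guest-side reboot (assumption worth confirming)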

If I/O performance is your main concern, I’m not sure gluster is the
answer. There are likely some others on this group who have more real world
experience with various environments and could give you more comparisons.

On Sat, Feb 29, 2020 at 8:22 AM Vrgotic, Marko 
wrote:

> Hi Strahil , Leo and Jayme,
>
>
>
> This thread is getting more and more useful, great.
>
>
>
> Atm, I have a 15-node cluster with shared storage from NetApp. The storage
> network (NFS 4.1) is on 20Gb LACP, separated from control.
>
> Performance is generally great, except in several test cases when using
> "send next data after write confirm". This situation does not care about
> speed of network, kernel buffers or any other buffers, but only about
> storage server speed, and then we hit the speed issue.
>
>
>
> The main reason why I am asking for HCI, is to get as close as possible to
> Local Storage speed with multiple hosts in same cluster.
>
> The idea is to add HCI to current setup, as second cluster, utilizing CPU
> RAM and LocalStorage of joined nodes.
>
> --Is this actually a direction that will get me to the desired result, or
> am I misunderstanding the purpose of HCI?
>
>
>
> I understand that the HCI with SHE requires replica2+arbiter or replica3,
> but that is not my situation. I wish only to add HCI for reasons above.
>
> --Do I need distributed-replicated in that case, or can I simply use a
> distributed (if still supported) setup?
>
>
>
> Jayme, I do have resources to set this up in a staged environment, and I
> will be happy to share the info, but first I need to find out if I am at
> all moving in right direction.
>
>
>
> Kindly awaiting your reply.
>
>
>
>
>
> -
>
> kind regards/met vriendelijke groeten
>
> Marko Vrgotic
>
> Sr. System Engineer @ System Administration
>
>
>
> ActiveVideo
>
> e: m.vrgo...@activevideo.com
>
> w: www.activevideo.com <https://www.activevideo.com>
>
> ActiveVideo Networks BV. Mediacentrum 3745 Joop van den Endeplein 1.1217
> WJ Hilversum, The Netherlands. The information contained in this message
> may be legally privileged and confidential. It is intended to be read only
> by the individual or entity to whom it is addressed or by their designee.
> If the reader of this message is not the intended recipient, you are on
> notice that any distribution of this message, in any form, is strictly
> prohibited.  If you have received this message in error, please immediately
> notify the sender and/or ActiveVideo Networks, LLC by telephone at +1
> 408.931.9200 and delete or destroy any copy of this message.
>
>
>
>
>
> On 29/02/2020, 11:53, "Strahil Nikolov"  wrote:
>
>
>
> On February 29, 2020 11:19:30 AM GMT+02:00, Jayme 
> wrote:
>
> >I currently have a three host hci in rep 3 (no arbiter). 10gbe network
>
> >and
>
> >ssds making up the bricks. I’ve wondered what the result of adding
>
> >three
>
> >more nodes to expand hci would be. Is there an overall storage
>
> >performance
>
> >increase when gluster is expanded like this?
>
> >
>
> >On Sat, Feb 29, 2020 at 4:26 AM Leo David  wrote:
>
> >
>
> >> Hi,
>
> >> As a first setup, you can go with a 3 nodes HCI and having the data
>
> >volume
>
> >> in a replica 3 setup.
>
> >> Afterwards, if you want to expand HCI ( compute and storage too) you
>
> >can
>
> >> add sets of 3  nodes, and the data volume will automatically become
>
> >> replicated-distributed. Safely, you can add sets of 3 nodes up to 12
>
> >nodes
>
> >> per HCI.
>
> >> You can also add "compute only nodes" and not extending storage too.
>
> >This
>
> >> can be done by adding nodes one by one.
>
> >> As an example, I have an implementation where there are 3 hyperconverged
>
> >nodes,
>
> >> they form a replica 3 volume, and later I added the 4th node to the
> >> cluster, which only adds RAM and CPU, whilst consuming storage from the
> >> existing 3-node volume.

[ovirt-users] Re: Hyperconverged setup questions

2020-02-29 Thread Jayme
I currently have a three-host HCI cluster in replica 3 (no arbiter), with a
10GbE network and SSDs making up the bricks. I’ve wondered what the result
of adding three more nodes to expand HCI would be. Is there an overall
storage performance increase when gluster is expanded like this?

On Sat, Feb 29, 2020 at 4:26 AM Leo David  wrote:

> Hi,
> As a first setup, you can go with a 3-node HCI and have the data volume
> in a replica 3 setup.
> Afterwards, if you want to expand HCI (compute and storage too) you can
> add sets of 3 nodes, and the data volume will automatically become
> replicated-distributed. Safely, you can add sets of 3 nodes up to 12 nodes
> per HCI.
> You can also add "compute only" nodes without extending storage; this
> can be done by adding nodes one by one.
> As an example, I have an implementation where there are 3 hyperconverged
> nodes forming a replica 3 volume; later I added a 4th node to the
> cluster, which only adds RAM and CPU, whilst consuming storage from the
> existing 3-node volume.
> Hope this helps.
> Cheers,
>
> Leo
>
>
> On Fri, Feb 28, 2020, 15:25 Vrgotic, Marko 
> wrote:
>
>> Hi Strahil,
>>
>>
>>
>> I circled back on your reply while ago regarding oVirt Hyperconverged and
>> more than 3 nodes in cluster:
>>
>>
>>
>> “Hi Marko, I guess  you can use distributed-replicated volumes  and
>> oVirt  cluster with host triplets.”
>>
>> Initially I understood that it's limited to 3 nodes max per HC cluster, but
>> now reading documentation further
>> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Maintenance_and_Upgrading_Resources.html
>> that does not look like it.
>>
>>
>>
>> Would you be so kind to give me an example or clarify what you meant by “*you
>> can use distributed-replicated volumes  and oVirt  cluster with host
>> triplets.*” ?
>>
>>
>>
>> Kindly awaiting your reply.
>>
>>
>>
>>
>>
>> -
>>
>> kind regards/met vriendelijke groeten
>>
>>
>>
>> Marko Vrgotic
>> ActiveVideo
>>
>>
>>
>>
>>
>>
>>
>> *From: *"Vrgotic, Marko" 
>> *Date: *Friday, 11 October 2019 at 08:49
>> *To: *Strahil 
>> *Cc: *users 
>> *Subject: *Re: [ovirt-users] Hyperconverged setup questions
>>
>>
>>
>> Hi Strahil,
>>
>>
>>
>> Thank you.
>>
>> One maybe stupid question, but significant to me:
>>
>> Considering I haven’t played with a hyperconverged setup in oVirt before,
>> is this something I can do from the Cockpit UI, or does it require me to
>> first set up GlusterFS on the hosts before doing anything via the oVirt API
>> or web interface?
>>
>>
>>
>> Kindly awaiting your reply.
>>
>>
>>
>> Marko
>>
>>
>>
>> Sent from my iPhone
>>
>>
>>
>> On 11 Oct 2019, at 06:14, Strahil  wrote:
>>
>> Hi Marko,
>>
>> I guess  you can use distributed-replicated volumes  and oVirt  cluster
>> with host triplets.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Oct 10, 2019 15:30, "Vrgotic, Marko" 
>> wrote:
>>
>> Dear oVirt,
>>
>>
>>
>> Is it possible to add oVirt 3Hosts/Gluster hyperconverged cluster to
>> existing oVirt setup? I need this to achieve Local storage performance, but
>> still have a pool of hypervisors available.
>>
>> Is it possible to have more than 3Hosts in Hyperconverged setup?
>>
>>
>>
>> I have currently 1Shared Cluster (NFS based storage, where also SHE is
>> hosted) and 2Local Storage clusters.
>>
>>
>>
>> oVirt current version running is 4.3.4.
>>
>>
>>
>> Kindly awaiting your reply.
>>
>>
>>
>>
>>
>> — — —
>> Met vriendelijke groet / Kind regards,
>>
>> *Marko Vrgotic*
>>
>> *ActiveVideo*
>>
>>
>>
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UH2FDN57V2TOQXD36UQXVTVCTB37O4OE/
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CJ5IAHCMNU3KSYUR3MCD2NNJTDEIHRNX/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/APDEYVK7HLTMCVEWPQ26NECDQ2SMERCI/


[ovirt-users] Re: Hyperconverged setup questions

2020-02-28 Thread Jayme
Marko,

From my understanding, you can have more than 3 hosts in an HCI cluster, but
to expand HCI you need to add hosts in multiples of three, i.e. go from 3
hosts to 6 or 9, etc.

You can still add hosts into the cluster as compute only hosts though.  So
you could have 3 hosts with gluster and a 4th that is just compute.
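
On the gluster side, expanding by one replica set looks roughly like this
(a sketch; host names and brick paths are placeholders, and in practice you
would drive this through the oVirt UI):

gluster peer probe host4
gluster peer probe host5
gluster peer probe host6

# Adds a second replica-3 set, making the volume distributed-replicate
gluster volume add-brick data replica 3 \
    host4:/gluster_bricks/data/data \
    host5:/gluster_bricks/data/data \
    host6:/gluster_bricks/data/data

# Spread existing data across the new bricks
gluster volume rebalance data start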

On Fri, Feb 28, 2020 at 9:24 AM Vrgotic, Marko 
wrote:

> Hi Strahil,
>
>
>
> I circled back on your reply while ago regarding oVirt Hyperconverged and
> more than 3 nodes in cluster:
>
>
>
> “Hi Marko, I guess  you can use distributed-replicated volumes  and
> oVirt  cluster with host triplets.”
>
> Initially I understood that it's limited to 3 nodes max per HC cluster, but
> now reading documentation further
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Maintenance_and_Upgrading_Resources.html
> that does not look like it.
>
>
>
> Would you be so kind to give me an example or clarify what you meant by “*you
> can use distributed-replicated volumes  and oVirt  cluster with host
> triplets.*” ?
>
>
>
> Kindly awaiting your reply.
>
>
>
>
>
> -
>
> kind regards/met vriendelijke groeten
>
>
>
> Marko Vrgotic
> ActiveVideo
>
>
>
>
>
>
>
> *From: *"Vrgotic, Marko" 
> *Date: *Friday, 11 October 2019 at 08:49
> *To: *Strahil 
> *Cc: *users 
> *Subject: *Re: [ovirt-users] Hyperconverged setup questions
>
>
>
> Hi Strahil,
>
>
>
> Thank you.
>
> One maybe stupid question, but significant to me:
>
> Considering I haven’t played with a hyperconverged setup in oVirt before,
> is this something I can do from the Cockpit UI, or does it require me to
> first set up GlusterFS on the hosts before doing anything via the oVirt API
> or web interface?
>
>
>
> Kindly awaiting your reply.
>
>
>
> Marko
>
>
>
> Sent from my iPhone
>
>
>
> On 11 Oct 2019, at 06:14, Strahil  wrote:
>
> Hi Marko,
>
> I guess  you can use distributed-replicated volumes  and oVirt  cluster
> with host triplets.
>
> Best Regards,
> Strahil Nikolov
>
> On Oct 10, 2019 15:30, "Vrgotic, Marko"  wrote:
>
> Dear oVirt,
>
>
>
> Is it possible to add oVirt 3Hosts/Gluster hyperconverged cluster to
> existing oVirt setup? I need this to achieve Local storage performance, but
> still have a pool of hypervisors available.
>
> Is it possible to have more than 3Hosts in Hyperconverged setup?
>
>
>
> I have currently 1Shared Cluster (NFS based storage, where also SHE is
> hosted) and 2Local Storage clusters.
>
>
>
> oVirt current version running is 4.3.4.
>
>
>
> Kindly awaiting your reply.
>
>
>
>
>
> — — —
> Met vriendelijke groet / Kind regards,
>
> *Marko Vrgotic*
>
> *ActiveVideo*
>
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UH2FDN57V2TOQXD36UQXVTVCTB37O4OE/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QATUNC7HC6ZV2CNCUVZVKYX2MEDUNQ7W/


[ovirt-users] Re: OVA import fails always

2020-02-27 Thread Jayme
If the problem is with the upload process specifically, it's likely that you
do not have the ovirt engine certificate installed in your browser.
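
The CA certificate can be fetched straight from the engine and then imported
into the browser's trust store (replace ENGINE_FQDN with your engine's
hostname):

curl -o ovirt-engine-ca.pem \
    'https://ENGINE_FQDN/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'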

On Thu, Feb 27, 2020 at 11:34 PM Juan Pablo Lorier 
wrote:

> Hi,
>
> I'm running 4.3.8.2-1.el7 (just updated engine to see if it helps) and I
> haven't been able to import vms in OVA format, I've tried many appliances
> downloaded from the web but couldn't get them to work.
>
> Any hints?
>
> Regards
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KRCE36GYQIOCXYR6K3KWUJA6R4ODWU56/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VWU7EM2ZFX6STEJU67WQTIRJZWBBVWZG/


[ovirt-users] Re: oVirt Python SDK and monitor export as OVA task

2020-02-27 Thread Jayme
Gianluca,

This is not a direct solution to your problem, but for my project here:
https://github.com/silverorange/ovirt_ansible_backup was recently updated
to make use of the ovirt_event_info Ansible module to determine the state of
the export. I'm not sure how to do the same in python directly, but I'm
sure it'd be possible. Unfortunately my solution is a full VM backup
including all disks which I understand is not your goal here.
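
That said, here is a rough, untested sketch of one way it might be done with
the Python SDK, by polling the engine's jobs; the description text matched
below is an assumption, so check what your engine actually reports:

import time
import ovirtsdk4.types as types

# "connection" is an already-authenticated ovirtsdk4.Connection
jobs_service = connection.system_service().jobs_service()

def wait_for_ova_export(vm_name, timeout=3600, poll=10):
    deadline = time.time() + timeout
    while time.time() < deadline:
        # Find export jobs that mention this VM by name
        jobs = [j for j in jobs_service.list()
                if j.description and 'Export' in j.description
                and vm_name in j.description]
        if jobs and all(j.status == types.JobStatus.FINISHED for j in jobs):
            return
        time.sleep(poll)
    raise RuntimeError('OVA export did not finish within the timeout')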

I have used vProtect in the past to backup VMs and it has the ability to
specify which VM disks to include in the backup. That may be an option for
you to explore.

- Jayme

On Thu, Feb 27, 2020 at 11:33 AM Gianluca Cecchi 
wrote:

> Hello,
> sometimes I have environments (typically with Oracle RDBMS on virtual
> machines) where there is one boot disk and one (often big, such as 500Gb or
> more) data disk.
> The data disk has already its application backup (typically RMAN) and I
> want to complement it with a backup of only the boot disk of the operating
> system and VM configuration.
> I'm almost done completing a script using oVirt Python SDK and sharing for
> comments.
> I could be wrong, but I didn't find any ansible module  to do this with
> Ansible playbooks: I can only save all the VM, that in my case wouldn't be
> necessary and instead time and storage wasting.
> The basic steps are:
> - make a snapshot of the target vm, composed only by the boot disk
> - clone the snapshot
> - export the clone as OVA
> - delete clone
> - delete snapshot
>
> There are some things to discuss about, probably when I will share the
> overall job (such as the name to give to the boot disk, if not using the
> bootable flag, the way to import the OVA in case of actual need, where you
> will have to comment out some fs entries related to missing disks, ecc.).
> The only thing missing is how to monitor the export to ova task that, as
> with Ansible, it completes almost immediately, while the export is actually
> altready running.
> I need to monitor it, so only at its real end I can run the last 2 steps
> of the above list of tasks.
>
> Can you give me any hint? Not found very much in guide or docs on the web.
> I'm currently using a sleep because my boot disk is about 20Gb in size and
> I know that in less than 2 minutes it normally completes.
>
> The export as ova is very similar to what found in the examples and they
> don't contain a monitor for it but only the connection.close() call:
>
> cloned_vm_service.export_to_path_on_host(
> host=types.Host(id=host.id),
> directory = export_as_ova_dir,
> filename = cloned_vm.name + '.ova'
> )
>
> Thanks in advance,
> Gianluca
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/66AFZIU4GTQRJSZR5F5P2OR6ZS6IBDP7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RL3BU5JXG4IN4XRJU3XFRMVGQFK7VIFY/


[ovirt-users] Re: Ovirt API and CLI

2020-02-27 Thread Jayme
Echoing what others have said. Ansible is your best option here.

On Thu, Feb 27, 2020 at 7:22 AM Nathanaël Blanchet  wrote:

>
> On 27/02/2020 at 11:00, Yedidyah Bar David wrote:
>
> On Thu, Feb 27, 2020 at 11:53 AM Eugène Ngontang  
>  wrote:
>
> Yes Ansible ovirt_vms module is useful, I use it for provisioning/deployment, 
> but once my VM created, I'd like to administrate/interact with them, I don't 
> think I should write playbooks for that.
>
> Why not? You're the next devops :)
>
> I used to use ovirt-shell (removed in 4.4); instead, I now control all my
> vms with ansible playbooks:
>
>- queries with the ovirt_*_info modules, using appropriate filters
>(combine, dict2items) and conditions (when, until)
>- changes with the other ovirt_* modules (using the present/absent states
>for all parameters)
>
> To be clear, I am not a developer, but once I got into the habit of working
> in a proper environment (venv, IDE, loops, structured playbooks and roles,
> dict structures, etc.), I was able to do what I wanted, or rather whatever
> the API let me do.
>
> Before beginning, I would advise you to take the time to study the
> structure of the registered variable's output.
>
> Here is one of my commonly used playbooks, which checks the status of
> selected vms:
- name: template ovirt to test the modules
  hosts: localhost
  connection: local
  tasks:
    - block:
        - include: ovirt_auth.yaml
          tags: auth,change
        - name: vm facts
          ovirt_vm_info:
            auth: "{{ ovirt_auth }}"
            pattern: "name=vm5 or name=vm8"
          register: vm_info
        - debug: var=vm_info.ovirt_vms
          # msg: "{{ vm_info.ovirt_vms | map(attribute='status') | list }}"
        - name: "Build a dictionary with combine"
          set_fact:
            vm_status: "{{ vm_status | default({}) | combine({item.name: item.status}) }}"
          loop: "{{ vm_info.ovirt_vms }}"
          when: item.status == "up"
        - debug:
            msg: "{{ vm_status }}"
      always:
        - include: ovirt_auth_revoke.yaml
          tags: auth,change
>
> Good luck!
>
> This is up to you, of course.
>
> For a project that uses heavily the ansible modules, see
> ovirt-ansible-hosted-engine-setup.
>
> For one that uses the python SDK, see ovirt-system-tests. The SDK
> itself also has a very useful collection of examples.
>
>
> But I'll find a solution.
>
> Good luck and best regards,
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> SIRE, 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax 33 (0)4 67 54 84 14
> blanc...@abes.fr
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LDCRW5YXHEMEY77XTHQKV4CAHHUKF43E/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ICVGFR5LJH7HFYY6S7TMS3GP4GPIPQMD/


[ovirt-users] oVirt ansible backup improvements

2020-02-25 Thread Jayme
If anyone has been following along, I had previously shared a blog post and
GitHub repo regarding my unofficial solution for backing up oVirt VMs using
Ansible.

Martin Necas reached out to me and we collaborated on some great
improvements. Namely, it is now possible to run the playbook from any host
without requiring direct access to storage (which I was previously using
for export status verification). There were several other improvements and
cleanups made as well.

The changes have been merged in and the README updated; you can find the
project here: https://github.com/silverorange/ovirt_ansible_backup

Big thanks to Martin for helping out. Very much appreciated!

- Jayme
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JNSY6GYNS6LPNUJXERUO2EOG5F3P7B2M/

