[ovirt-users] Re: Should I migrate existing oVirt Engine, or deploy new?

2022-09-17 Thread David White via Users
I thought I'd report back to the list and mention that I was successful in 
migrating off of the hyperconverged environment onto a stand-alone engine 
environment, and Gluster has been removed from the oVirt configuration.

I ran into a few minor hiccups, all of which were resolved fairly easily, and I 
took notes. I intend to submit a PR to the GitHub documentation, since none 
currently exists for migrating the engine off of a hyperconverged environment.

My only remaining questions at this point are:

-   Are there things on the hosts themselves that I should clean up? I noticed 
that the "hosted-engine" command still exists. I went to run a `yum remove` on 
that, and it tried to remove basically everything... so I figured that wasn't 
actually a good idea. (See the sketch after these questions.)


-   Do I need to do anything in the oVirt config (maybe something in the 
Postgres database) to basically tell it that it is no longer self-hosted, but 
is instead stand-alone?
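
For the first question, a gentler approach might be to leave the packages in 
place and just neutralize the hosted-engine HA pieces on each host. A minimal, 
untested sketch, assuming the standard service and config-file names 
(ovirt-ha-agent, ovirt-ha-broker, /etc/ovirt-hosted-engine/hosted-engine.conf); 
please verify on your hosts before removing anything:

# on each former hosted-engine host
systemctl disable --now ovirt-ha-agent ovirt-ha-broker   # stop the HE high-availability daemons
mv /etc/ovirt-hosted-engine/hosted-engine.conf{,.bak}    # park the HE config rather than deleting it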




--- Original Message ---
On Friday, August 19th, 2022 at 11:01 AM, David White via Users wrote:


> Hi Paul,
> Thanks for the response.
> 

> I think you're suggesting that I take a hybrid approach, and do a restore of 
> the current Engine onto the new VM. I hadn't thought about this option.
> 

> Essentially what I was considering was either:
> 

> -   Export to OVA or something
> OR
> -   Build a completely new oVirt engine with a completely new domain, etc... 
> and try to live migrate the VMs from the old engine to the new engine.
> 

> 

> Do I understand you correctly that you're suggesting I install the OS onto a 
> new VM, and try to do a restore of the oVirt settings onto the new VM (after 
> I put the cluster into Global maintenance mode and shutdown the old oVirt)?
> 

> --- Original Message ---
> On Friday, August 19th, 2022 at 10:46 AM, Staniforth, Paul wrote:
> 

> 

> > Hello David,
> >   I don't think there's a documented method to go from 
> > a Hosted Engine to standalone, just the other way (standalone to HE).
> > 

> > I would suggest doing a full backup of the engine, preparing the new VM, and 
> > restoring to that, rather than trying to export it.
> > This way you can shut down the original engine and run the new engine VM to 
> > test that it works, and you will be able to restart the original engine if 
> > it doesn't.
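> > 
> > Something like this is what I have in mind. A rough sketch only; check
> > `engine-backup --help` on your release for the exact flags:
> > 
> > # on the old engine: take a full backup
> > engine-backup --mode=backup --file=engine.tar.gz --log=backup.log
> > 
> > # on the freshly installed engine VM (ovirt-engine installed, engine-setup not yet run)
> > engine-backup --mode=restore --file=engine.tar.gz --log=restore.log \
> >   --provision-all-databases --restore-permissions
> > # (older releases use --provision-db / --provision-dwh-db instead)
> > engine-setup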
> > 

> > Regards,
> >     Paul S.
> > 

> > From: David White via Users 
> > Sent: 19 August 2022 15:27
> > To: David White 
> > Cc: oVirt Users 
> > Subject: [ovirt-users] Re: Should I migrate existing oVirt Engine, or 
> > deploy new?
> > 

> > In other words, I want to migrate the Engine from a hyperconverged 
> > environment into a stand-alone setup.
> > 

> > --- Original Message ---
> > On Friday, August 19th, 2022 at 10:17 AM, David White via Users wrote:
> > 

> > 

> > > Hello,
> > > I have just purchased a Synology SA3400 which I plan to use for my oVirt 
> > > storage domain(s) going forward. I'm currently using Gluster storage in a 
> > > hyperconverged environment.
> > > 

> > > My goal now is to:
> > > 

> > > -   Use the Synology Virtual Machine manager to host the oVirt Engine on 
> > > the Synology
> > > -   Set up NFS storage on the Synology as the storage domain for all VMs 
> > > in our environment
> > > -   Migrate all VM storage onto the new NFS domain
> > > -   Get rid of Gluster
> > > 

> > > 

> > > My first step is to migrate the oVirt Engine off of Gluster storage / off 
> > > the Hyperconverged hosts into the Synology Virtual Machine manager. 
> > > 

> > > Is it possible to migrate the existing oVirt Engine (put the cluster into 
> > > Global Maintenance Mode, shut down oVirt, export to VDI or something, and 
> > > then import into Synology's virtualization)? Or would it be better for me 
> > > to install a completely new Engine, and then somehow migrate all of the 
> > > VMs from the old engine into the new engine?
> > > 
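> > > (If the export route turns out to be viable, I imagine something like the 
> > > following after shutting the engine down. Purely a sketch: the source path 
> > > is a placeholder, and I'd have to confirm which image formats Synology VMM 
> > > accepts.)
> > > 
> > > # convert the hosted-engine disk image for import into Synology VMM
> > > qemu-img convert -p -O vdi /path/to/hosted-engine-disk.img engine.vdi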

> > > Thanks,
> > > David
> > > 


[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-17 Thread Diego Ercolani
Parameter cluster.choose-local is set to off.
I confirm the filesystems of the bricks are all XFS, as required.
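
For reference, the checks look like this (volume name assumed to match the 
brick naming below):

gluster volume get gv0 cluster.choose-local   # effective value of the option
xfs_info /brickgv0                            # confirms the brick filesystem is XFS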
I started the farm only as a test bench for an oVirt implementation, so I used 
3 hosts based on desktop Ryzen 5 platforms, each equipped with 4 DDR modules 
(4 x 32 GB), one disk for the OS, and the other disks to use as data bricks or 
NFS targets. The data disks are all SATA, while the OS is installed on an 
internal M.2 disk.
Node 4 uses only the internal M.2 disk, as it is the arbiter and doesn't 
require much space.
Every host is equipped with a dual-channel Intel X520 chipset with 2 SFP+ 
ports configured with a 9000-byte packet size. The access LAN is the management 
LAN (and also the LAN used by Gluster); the VLANs are the "production" VLANs.
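
(As a sanity check of the 9000 MTU end to end between the hosts, a 
do-not-fragment ping works: 8972 bytes of payload, which is 9000 minus 28 bytes 
of IP/ICMP headers. The hostname is illustrative.)

ping -M do -s 8972 -c 3 ovirt-node3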

node2:
/dev/mapper/glustervg-glhe on /brickhe type xfs 
(rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/glustervg-gv0 on /brickgv0 type xfs 
(rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/glustervg-gv1 on /brickgv1 type xfs 
(rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)

  Devices file mpath_uuid part4-mpath-Samsung_SSD_860_EVO_500GB_S4XBNF0N334942Y PVID 9iyl5761LWcy3AYy36fNcPk0fADjNYtC last seen on /dev/mapper/Samsung_SSD_860_EVO_500GB_S4XBNF0N334942Y4 not found.
  PV /dev/mapper/Samsung_SSD_870_EVO_4TB_S6BCNG0R300064E   VG glustervg         lvm2 [<3.64 TiB / 1.54 TiB free]
  PV /dev/nvme0n1p4                                        VG glustervg         lvm2 [<287.02 GiB / <287.02 GiB free]
  PV /dev/nvme0n1p3                                        VG onn_ovirt-node2   lvm2 [177.15 GiB / <33.71 GiB free]
  PV /dev/mapper/ST4000NM000A-2HZ100_WJG1ZC85              VG daticold          lvm2 [<3.64 TiB / 2.44 TiB free]

node3:
/dev/mapper/glustervg-glhe on /brickhe type xfs 
(rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/glustervg-gv1 on /brickgv1 type xfs 
(rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/glustervg-gv0 on /brickgv0 type xfs 
(rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)

[root@ovirt-node3 ~]# pvscan -v
  PV /dev/sda VG glustervg lvm2 [<3.64 TiB / <1.64 TiB free]
  PV /dev/nvme0n1p4   VG glustervg lvm2 [<287.02 GiB / <187.02 GiB free]
  PV /dev/nvme0n1p3   VG onn_ovirt-node3   lvm2 [177.15 GiB / <33.71 GiB free]
  Total: 3 [4.09 TiB] / in use: 3 [4.09 TiB] / in no VG: 0 [0   ]

node4:
/dev/mapper/onn_ovirt--node4-gluster on /dati type xfs 
(rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)


[ovirt-users] Re: Error during deployment of ovirt-engine

2022-09-17 Thread jonas
I meant: what exactly is broken, and is there a workaround or a planned fix?

I performed a deployment using hosted-engine. This gets one step further, but 
fails while waiting for the host:
[...]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the host to be up]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Host is not up, please check logs, perhaps also on the engine machine"}
[...]

On the hosted-engine, I see in 
/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20220917140938-server-005.admin.int.rabe.ch-937b3c69-a4d5-45c7-af0f-b8705490ff2a.log:
[...]
2022-09-17 14:12:30 CEST - {
  "uuid" : "97891064-a215-4709-b944-ceba2d13b19f",
  "counter" : 391,
  "stdout" : "fatal: [server-005.admin.int.rabe.ch]: FAILED! => {\"msg\": \"The conditional check 'cluster_switch == \\\"ovs\\\" or (ovn_central is defined and ovn_central | ipaddr)' failed. The error was: The ipaddr filter requires python's netaddr be installed on the ansible controller\\n\\nThe error appears to be in '/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-provider-ovn-driver/tasks/configure.yml': line 3, column 5, but may\\nbe elsewhere in the file depending on the exact syntax problem.\\n\\nThe offending line appears to be:\\n\\n- block:\\n  - name: Install ovs\\n^ here\\n\"}",
[...]

But netaddr is installed:
[root@ovirt-engine-test host-deploy]# pip3 install netaddr
WARNING: Running pip install with root privileges is generally not a good idea. 
Try `pip3 install --user` instead.
Requirement already satisfied: netaddr in /usr/lib/python3.6/site-packages
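
One thing worth checking (a guess on my part): the conditional is evaluated by 
the Ansible run on the engine, which may use a different Python environment 
than the one pip3 installed into. For example:

ansible --version | grep 'python version'               # interpreter ansible actually uses
python3 -c 'import netaddr; print(netaddr.__file__)'    # confirm netaddr imports there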

Any ideas what I can do?


[ovirt-users] Migrating VM from old ovirt to a new one

2022-09-17 Thread Facundo Badaracco
Hi everyone

I would like to ask you a question.

I have an oVirt 4.4 whose certs got corrupted, and I can't access the GUI.
It gives me a 500 Internal Server Error when I try to access it.

I have a new oVirt 4.5 on GlusterFS to which I would like to migrate all my
VMs.

Is this possible? Can it be done via the CLI? It would save me from having to
reinstall all the VMs.


[ovirt-users] Re: Gluster setup for oVirt

2022-09-17 Thread jonas
I don't have that anymore, but I assume that gluster_infra_lv_logicalvols 
requires a thin pool: 
https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/thin_volume_create.yml
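
(Roughly what the role automates underneath, expressed as plain LVM commands; 
names and sizes are illustrative:)

lvcreate -L 100G --thinpool gluster_thinpool glustervg        # create the thin pool the role expects
lvcreate -V 90G --thin -n gv0_lv glustervg/gluster_thinpool   # carve a thin LV out of the pool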