[ovirt-users] Re: Upgrade hyperconverged self-hosted ovirt from 4.3 to 4.4

2020-11-20 Thread ralf
Hi,
Today I tested the steps. They actually worked, but I needed to add a few things.

0. Gluster snapshots on all volumes 
I did not need that.
1. Set a node in maintenance
2. Create a full backup of the engine
3. Set global maintenance and power off the current engine
4. Backup all gluster config files
Back up /etc/glusterfs/, /var/lib/glusterd/, /etc/fstab and the directories in 
/gluster_bricks (roughly as sketched below).
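For reference, steps 2-4 boil down to roughly the following (a sketch from 
memory; the file names are just examples):

  # step 2, on the current engine VM: full engine backup (copy the files off afterwards)
  engine-backup --mode=backup --file=engine-43.backup --log=engine-backup.log

  # step 3, on one of the hosts: global maintenance, then power off the engine
  hosted-engine --set-maintenance --mode=global
  hosted-engine --vm-shutdown

  # step 4, on the node to be reinstalled: save the gluster configs and fstab
  tar czf gluster-config-backup.tar.gz /etc/glusterfs /var/lib/glusterd /etc/fstab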

5. Reinstall the node that was set to maintenance (step 1)
6. Install glusterfs, restore the configs from step 4
Modify /etc/fstab, create the directories in /gluster_bricks, and mount the 
bricks.
I had to remove the LVM filter so that the volume group used for the bricks 
gets scanned. (Steps 6-9 are sketched as commands after step 9.)
7. Restart glusterd and check that all bricks are up
I had to force the volumes to start
8. Wait for healing to end
9. Deploy the new HE on a new Gluster Volume, using the backup/restore 
procedure for HE
I could create a new gluster volume in the existing thin pool. During the 
deployment I specified the gluster volume
station1:/engine-new
with the mount option backup-volfile-servers=station2:station3.
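Roughly, steps 6-9 as commands (again just a sketch; apart from engine-new, the 
volume and file names below are examples from my setup):

  # step 6: restore the configs saved in step 4, recreate and mount the bricks
  tar xzf gluster-config-backup.tar.gz -C /
  mkdir -p /gluster_bricks/engine /gluster_bricks/data /gluster_bricks/vmstore
  # after relaxing the LVM filter in /etc/lvm/lvm.conf so the brick VG is scanned:
  mount -a

  # step 7: restart gluster and force the volumes to start if they stay down
  systemctl restart glusterd
  gluster volume start data force
  gluster volume status

  # step 8: wait until nothing is left to heal on any volume
  gluster volume heal data info summary

  # step 9: deploy the new HE and restore the backup from step 2; when asked for
  # the storage I gave station1:/engine-new with the mount option
  # backup-volfile-servers=station2:station3
  hosted-engine --deploy --restore-from-file=engine-43.backup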

10.Add the other nodes from the oVirt cluster 
 Actually I did not need to add the nodes. They were directly available in 
the engine. 
11.Set EL7-based hosts to maintenance and power off
 My setup is based on oVirt Node. I put the first node into maintenance, 
created a backup of the gluster configuration, and manually installed oVirt 
Node 4.4. I restored the gluster setup and waited for gluster to heal.
 I then copied the SSH key from a different node (sketched after step 12) and 
reinstalled the node via the web interface.
 I manually set the hosted-engine option to deploy during reinstallation.
 I repeated these steps for all hosts.
12. I put the old engine gluster storage domain into maintenance, detached it, 
and removed the domain.
 Then I could remove the old gluster volume as well.
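The SSH key copy in step 11 was essentially just this (run on the freshly 
installed node; station2 stands for any node still registered in the engine):

  # pull root's authorized_keys from a node that is still in the cluster, so the
  # engine can log in and reinstall the host from the web interface
  mkdir -p -m 700 /root/.ssh
  scp station2:/root/.ssh/authorized_keys /root/.ssh/authorized_keys
  chmod 600 /root/.ssh/authorized_keys
  restorecon -Rv /root/.ssh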

In the end I was running 4.4 and migrations work.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MQKE2VQU6EP42ZDAA2CUWSGJM3MXYLFV/


[ovirt-users] Re: Upgrade hyperconverged self-hosted ovirt from 4.3 to 4.4

2020-11-10 Thread ralf
Thanks a lot for the suggestions. I will try to follow your routine on a test 
setup next week.
Kind regards,

Ralf


[ovirt-users] Re: Upgrade hyperconverged self-hosted ovirt from 4.3 to 4.4

2020-11-09 Thread Strahil Nikolov via Users


Hi,
I am interested in these steps too, for a clean and straightforward procedure.
Although this plan looks pretty good, I am still wondering:

Step 4
>Backup all gluster config files
- could you please let me know what would be the exact location(s) of the 
files to be backed up?

/etc/glusterfs/
/var/lib/glusterd/


Step 6
>Install glusterfs, restore the configs from step 4
>- would the configs work with this version?
>- would gluster in theory get back to a previous balanced state?

It's like upgrading GlusterFS locally - you update the RPMs, but the configs 
remain the same. In my case I'm already on Gluster v7, so it should go 
unnoticed. Even going from v6 to v7 should not cause any issues, yet I would 
take the backups before installing glusterfs at all.
The idea is to avoid removing and later re-adding the brick, which would lead 
to a lot of healing.

Step 9

>Deploy the new HE on a new Gluster Volume, using the backup/restore procedure 
>for HE.
- this assumes first creating a new volume based on some additional new disks 
or LVs, right?
Yep, you want to keep the old volume, just in case you want to revert. In case 
a revert is necessary you have 2 options:
1) If the new engine has managed to power up and somehow changed the storage 
domain version (it should do that only after the whole cluster is upgraded and 
the cluster version is raised) -> you have to revert all snapshots, which can 
lead to a short data loss but a faster recovery time.
2) If the engine didn't power up at all, just kill the EL8 host (to ensure the 
new engine is not up) and remove global maintenance from one of the old hosts - 
the old engine will power up, and then you have to reinstall the host that was 
used for the EL8 fiasco :)
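For completeness, step 0 and the revert in option 1 would look roughly like 
this (untested; 'data' is just an example volume name, and a volume has to be 
stopped before a snapshot can be restored):

  # step 0: snapshot every volume before starting the upgrade
  gluster snapshot create data-pre44 data no-timestamp description "before 4.4 upgrade"
  gluster snapshot list

  # revert path (option 1): stop the volume, restore the snapshot, start it again
  gluster volume stop data
  gluster snapshot restore data-pre44
  gluster volume start data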
> 
> I haven't done it yet, but I'm planing to do it.
> As I haven't tested the following, I can't guarantee that it will work:
> 0. Gluster snapshots on all volumes 
> 1. Set a node in maintenance
> 2. Create a full backup of the engine
> 3. Set global maintenance and power off the current engine
> 4. Backup all gluster config files
> 5. Reinstall the node that was set to maintenance (step 1)
> 6. Install glusterfs, restore the configs from step 4
> 7. Restart glusterd and check that all bricks are up
> 8. Wait for healing to end
> 9. Deploy the new HE on a new Gluster Volume, using the backup/restore 
> procedure for HE
> 10.Add the other nodes from the oVirt cluster
> 11.Set EL7-based hosts to maintenance and power off
> 12.Repeat steps 4-8 for the second host (step 11)
> ...
> In the end, you can bring the Cluster Level up to 4.4 and enjoy...
> 

Of course, you have the option to create a new setup and migrate the VMs one 
by one (when downtime allows) from the 4.3 setup to the 4.4 setup.

Best Regards,
Strahil Nikolov


[ovirt-users] Re: Upgrade hyperconverged self-hosted ovirt from 4.3 to 4.4

2020-11-09 Thread Leo David
Hi,
I am interested in these steps too, for a clean and straightforward
procedure.
Although this plan looks pretty good, I am still wondering:

Step 4
Backup all gluster config files
- could you please let me know what would be the exact location(s) of the
files to be backed up?

Step 6
Install glusterfs, restore the configs from step 4
- would the configs work with this version?
- would gluster in theory get back to a previous balanced state?

Step 9

Deploy the new HE on a new Gluster Volume, using the backup/restore
procedure for HE.
- this assumes first creating a new volume based on some additional new
disks or LVs, right?

Sorry if I'm missing something due to my lack of knowledge.
Cheers,

Leo

On Mon, Nov 9, 2020, 17:40 Strahil Nikolov via Users wrote:

> Hi ,
>
> I haven't done it yet, but I'm planning to do it.
> As I haven't tested the following, I can't guarantee that it will work:
> 0. Gluster snapshots on all volumes
> 1. Set a node in maintenance
> 2. Create a full backup of the engine
> 3. Set global maintenance and power off the current engine
> 4. Backup all gluster config files
> 5. Reinstall the node that was set to maintenance (step 1)
> 6. Install glusterfs, restore the configs from step 4
> 7. Restart glusterd and check that all bricks are up
> 8. Wait for healing to end
> 9. Deploy the new HE on a new Gluster Volume, using the backup/restore
> procedure for HE
> 10.Add the other nodes from the oVirt cluster
> 11.Set EL7-based hosts to maintenance and power off
> 12.Repeat steps 4-8 for the second host (step 11)
> ...
> In the end, you can bring the Cluster Level up to 4.4 and enjoy...
>
>
> Yet, this is just theory :)
>
> Best Regards,
> Strahil Nikolov
>
> Keep in mind that the gluster snapshot feature allows you to revert
>
>
>
>
>
>
> On Monday, 9 November 2020, 08:19:23 GMT+2,  wrote:
>
>
>
>
>
> Hi,
> has anyone attempted an upgrade from 4.3 to 4.4 in a hyperconverged
> self-hosted setup?
> The posted guidelines seem a bit contradictory and incomplete.
> Has anyone tried it and could share their experiences? I am currently having
> problems when deploying the hosted engine and restoring. The host becomes
> unresponsive and has hung tasks.
>
> Kind regards,
>
> Ralf


[ovirt-users] Re: Upgrade hyperconverged self-hosted ovirt from 4.3 to 4.4

2020-11-09 Thread Strahil Nikolov via Users
Hi,

I haven't done it yet, but I'm planning to do it.
As I haven't tested the following, I can't guarantee that it will work:
0. Gluster snapshots on all volumes 
1. Set a node in maintenance
2. Create a full backup of the engine
3. Set global maintenance and power off the current engine
4. Backup all gluster config files
5. Reinstall the node that was set to maintenance (step 1)
6. Install glusterfs, restore the configs from step 4
7. Restart glusterd and check that all bricks are up
8. Wait for healing to end
9. Deploy the new HE on a new Gluster Volume, using the backup/restore 
procedure for HE
10.Add the other nodes from the oVirt cluster
11.Set EL7-based hosts to maintenance and power off
12.Repeat steps 4-8 for the second host (step 11)
...
In the end, you can bring the Cluster Level up to 4.4 and enjoy...


Yet, this is just theory :)

Best Regards,
Strahil Nikolov

Keep in mind that the gluster snapshot feature allows you to revert 






On Monday, 9 November 2020, 08:19:23 GMT+2,  wrote: 





Hi,
has anyone attempted an upgrade from 4.3 to 4.4 in a hyperconverged self-hosted 
setup?
The posted guidelines seem a bit contradictory and incomplete.
Has anyone tried it and could share their experiences? I am currently having 
problems when deploying the hosted engine and restoring. The host becomes 
unresponsive and has hung tasks.

Kind regards,

Ralf


[ovirt-users] Re: Upgrade hyperconverged self-hosted ovirt from 4.3 to 4.4

2020-11-09 Thread ralf
I have managed to work around the hanging host by reducing the memory of the 
hosted engine during the deploy. But unfortunately the deploy still fails.

There is no real error message in the deployment log:
2020-11-09 08:58:55,337+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.hosted_engine_setup : Wait for 
the host to be up]
2020-11-09 09:02:24,776+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 ok: [localhost]
2020-11-09 09:02:26,380+0100 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
TASK [ovirt.hosted_engine_setup : debug]
2020-11-09 09:02:27,883+0100 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
host_result_up_check: {'changed': False, 'ovirt_hosts': [{'href': 
'/ovirt-engine/api/hosts/9e504890-bcb8-40b1-813f-ee123547b3f9', 'comment': '', 
'id': '9e504890-bcb8-40b1-813f-ee123547b3f9', 'name': 'station5.example.com', 
'address': 'station5.example.com', 'affinity_labels': [], 'auto_numa_status': 
'unknown', 'certificate': {'organization': 'example.com', 'subject': 
'O=example.com,CN=station5.example.com'}, 'cluster': {'href': 
'/ovirt-engine/api/clusters/1e67ce6a-2011-11eb-8029-00163e28a2ed', 'id': 
'1e67ce6a-2011-11eb-8029-00163e28a2ed'}, 'cpu': {'name': 'Intel(R) Core(TM) 
i5-3470 CPU @ 3.20GHz', 'speed': 3554.0, 'topology': {'cores': 4, 'sockets': 1, 
'threads': 1}, 'type': 'Intel SandyBridge IBRS SSBD MDS Family'}, 
'device_passthrough': {'enabled': False}, 'devices': [], 
'external_network_provider_configurations': [], 'external_status': 'ok', 
'hardware_information': {'family': '103C_53307F G=D', 'manufacturer': 'Hewlett-Packard', 'product_name': 
'HP Compaq Pro 6300 SFF', 'serial_number': 'CZC41045SC', 
'supported_rng_sources': ['random', 'hwrng'], 'uuid': 
'F748FD00-9E43-11E3-9BDA-A0481C87CA32', 'version': ''}, 'hooks': [], 'iscsi': 
{'initiator': 'iqn.1994-05.com.redhat:a668abd829a1'}, 'katello_errata': [], 
'kdump_status': 'disabled', 'ksm': {'enabled': False}, 'libvirt_version': 
{'build': 0, 'full_version': 'libvirt-6.0.0-25.2.el8', 'major': 6, 'minor': 0, 
'revision': 0}, 'max_scheduling_memory': 16176381952, 'memory': 16512974848, 
'network_attachments': [], 'nics': [], 'numa_nodes': [], 'numa_supported': 
False, 'os': {'custom_kernel_cmdline': '', 'reported_kernel_cmdline': 
'BOOT_IMAGE=(hd0,msdos1)//ovirt-node-ng-4.4.2-0.20200918.0+1/vmlinuz-4.18.0-193.19.1.el8_2.x86_64
 crashkernel=auto resume=/dev/mapper/onn-swap 
rd.lvm.lv=onn/ovirt-node-ng-4.4.2-0.20200918.0+1 rd.lvm.lv=onn/swap rhgb quiet 
root=/dev/onn/ovirt-node-ng-4.4.2-0.20200918.0+1 boot=UUID=78682d93-a122-4ea9-8593-224fa32b7ab4 rootflags=discard 
img.bootid=ovirt-node-ng-4.4.2-0.20200918.0+1', 'type': 'RHEL', 'version': 
{'full_version': '8 - 2.2004.0.1.el8', 'major': 8}}, 'permissions': [], 'port': 
54321, 'power_management': {'automatic_pm_enabled': True, 'enabled': False, 
'kdump_detection': True, 'pm_proxies': []}, 'protocol': 'stomp', 'se_linux': 
{'mode': 'enforcing'}, 'spm': {'priority': 5, 'status': 'none'}, 'ssh': 
{'fingerprint': 'SHA256:pUi4oFo/5DGLYbWN39rEiap3bUfVK1C/6OPEecf8GFg', 'port': 
22}, 'statistics': [], 'status': 'non_operational', 'status_detail': 
'storage_domain_unreachable', 'storage_connection_extensions': [], 'summary': 
{'active': 1, 'migrating': 0, 'total': 1}, 'tags': [], 
'transparent_huge_pages': {'enabled': True}, 'type': 'rhel', 
'unmanaged_networks': [], 'update_available': False, 'version': {'build': 26, 
'full_version': 'vdsm-4.40.26.3-1.el8', 'major': 4, 'minor': 40, 'revision': 
3}, 'vgpu_placement': 'consolidated'}], 'failed': False, 'attempts': 21}
2020-11-09 09:02:29,386+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.hosted_engine_setup : Notify the 
user about a failure]
2020-11-09 09:02:30,891+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 skipping: [localhost]
2020-11-09 09:02:32,395+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.hosted_engine_setup : set_fact]
2020-11-09 09:02:33,800+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 ok: [localhost]
2020-11-09 09:02:35,404+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.hosted_engine_setup : Collect 
error events from the Engine]
2020-11-09 09:02:37,410+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 ok: [localhost]
2020-11-09 09:02:39,114+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.hosted_engine_setup : Generate 
the error message from the engine events]
2020-11-09 09:02:40,819+0100 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': "The task includes