[ovirt-users] Re: hosted-engine --deploy --restore-from-file fails on oVirt node 4.5.1.3

2022-08-06 Thread P F
I'm unable to recreate the original problem.

The good news is, the process moves past the engine_setup now.
The ovirt-engine server actually starts, and is exposed on 
https://:6900/ovirt-engine

The bad news is, when I try to access the engine Web UI at that URL, I get a 
'500 Internal Server Error'.
I don't see any obvious errors in the log files under /var/log/ovirt-engine.

I'm able to access the URL https://:6900/ovirt-engine.
However, as soon as I click the "Administration Portal" link on the main page, 
I see the '500 Internal Server Error'.

I do notice the following error in /var/log/httpd/ssl_error_log:

[Sat Aug 06 18:45:32.106641 2022] [auth_openidc:error] [pid 1648:tid 
139896547178240] [client 192.168.222.3:58098] oidc_authenticate_user: the URL 
hostname (ovirt-engine.internal.net) of the configured OIDCRedirectURI does not 
match the URL hostname of the URL being accessed (ovirt-node04.internal.net): 
the "state" and "session" cookies will not be shared between the two!, referer: 
https://ovirt-node04.internal.net:6900/ovirt-engine/

The error above suggests it will not be possible to access the engine Web UI 
while it is temporarily exposed on port 6900.
How has this ever worked in the past?
What do I need to do to access the engine Web UI? I need it to configure the 
host's network to include several VLANs that are necessary to complete the 
restore of the engine DB.
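
One workaround I'm considering (untested): since the OIDC redirect URI is tied 
to the engine FQDN, point that FQDN at the node on the client side and keep 
using port 6900. The IP below is a placeholder for ovirt-node04's real address.

# On the machine running the browser (not on the node): map the engine
# FQDN to the node, so the hostname being accessed matches the one in
# the configured OIDCRedirectURI.
echo '192.168.1.44  ovirt-engine.internal.net' | sudo tee -a /etc/hosts

# Then browse to https://ovirt-engine.internal.net:6900/ovirt-engine
# and remove the /etc/hosts entry once the restore completes.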

Prior to the above, I:
- took a fresh backup of the oVirt engine (backup command sketched below),
- set up a fresh/new oVirt node,
- copied the backup to the new node,
- ran the 'hosted-engine --deploy --restore-from-file=' command.
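
For reference, the backup step was essentially the standard engine-backup 
invocation (the file name matches the one passed to the deploy command below; 
the log file name is arbitrary):

# On the engine being backed up: dump configuration and databases
# into a single file.
engine-backup --mode=backup --scope=all \
  --file=20220806-all --log=engine-backup.log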
Below is the complete output of that attempt, up to the point where I get the 
'500 Internal Server Error' in the browser.

--- snip ---
[root@ovirt-node04 ~]# hosted-engine --deploy --restore-from-file=20220806-all
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
  During customization use CTRL-D to abort.
  Continuing will configure this host for serving as hypervisor and 
will create a local VM with a running engine.
  The provided engine backup file will be restored there,
  it's strongly recommended to run this tool on an host that wasn't 
part of the environment going to be restored.
  If a reference to this host is already contained in the backup file, 
it will be filtered out at restore time.
  The locally running engine will be used to configure a new storage 
domain and create a VM there.
  At the end the disk of the local VM will be moved to the shared 
storage.
  The old hosted-engine storage domain will be renamed, after checking 
that everything is correctly working you can manually remove it.
  Other hosted-engine hosts have to be reinstalled from the engine to 
update their hosted-engine configuration.
  Are you sure you want to continue? (Yes, No)[Yes]: 
  It has been detected that this program is executed through an SSH 
connection without using tmux.
  Continuing with the installation may lead to broken installation if 
the network connection fails.
  It is highly recommended to abort the installation and run it inside 
a tmux session using command "tmux".
  Do you want to continue anyway? (Yes, No)[No]: Yes
  Configuration files: 
  Log file: 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20220806160814-wehm0s.log
  Version: otopi-1.10.0 (otopi-1.10.0-1.el8)
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup (late)
[ INFO  ] Stage: Environment customization
 
  --== STORAGE CONFIGURATION ==--
 
 
  --== HOST NETWORK CONFIGURATION ==--
 
[ INFO  ] Bridge ovirtmgmt already created
  Please indicate the gateway IP address [192.168.1.1]: 
[ INFO  ] Checking available network interfaces:
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set 
of steps]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Detecting interface on 
existing management bridge]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Set variable for supported 
bond modes]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get all active network 
interfaces]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Filter bonds with bad naming]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Generate output list]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Collect interface types]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check for Team devices]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get list of Team devices]

[ovirt-users] Re: Gluster network - associate brick

2022-08-06 Thread Strahil Nikolov via Users
If you wish the Gluster traffic to be over 172.16.20.X/24, you will have to 
change the bricks in the volume to 172.16.20.X:/gluster_bricks/vmstore/vmstore

The simplest way is to:

gluster volume remove-brick VOLUMENAME replica 2 node3.mydomain.lab:/gluster_bricks/data/data force

# On node3:
umount /gluster_bricks/data
mkfs.xfs -f -i size=512 /dev/GLUSTER_VG/GLUSTER_LV
mount /gluster_bricks/data
mkdir /gluster_bricks/data/data
chown 36:36 -R /gluster_bricks/data/data
restorecon -RFvv /gluster_bricks/data

# If you have entries in /etc/hosts or in the DNS, you can swap the IP with it:
gluster volume add-brick VOLUMENAME replica 3 172.16.20.X:/gluster_bricks/data/data
gluster volume heal VOLUMENAME full

# Wait until the volume heals, then repeat with the other 2 bricks
# (see the verification sketch below).

Of course, if it's a brand new setup, it's easier to wipe the disks and then 
reinstall the nodes to start fresh.
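
A quick way to verify each swap before moving to the next brick (VOLUMENAME is 
a placeholder for the actual volume name):

# The bricks should now be listed with the 172.16.20.X addresses:
gluster volume info VOLUMENAME | grep -i brick

# Every brick should report 0 unhealed entries before you touch the next one:
gluster volume heal VOLUMENAME info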
Best Regards,
Strahil Nikolov
 
On Fri, Aug 5, 2022 at 18:56, r greg wrote:

hi all,

*** new to oVirt and still learning ***

Sorry for the long thread...

I have a 3-node hyperconverged setup on v4.5.1.

4x 1G NICs:

NIC0 
> ovirtmgmt (Hosted-Engine VM)
> vmnetwork vlan102 (all VMs are placed on this network)
NIC1
> migration
NIC2 - NIC3 > bond0
> storage

Logical Networks:
ovirtmgmt - role: VM network | management | display | default route
vmnetwork - role: VM network
migrate - role: migration network
storage - role: gluster network

During deployment I overlooked a setting, and on node2 the host was deployed 
with Name: node2.mydomain.lab and Hostname/IP: 172.16.20.X/24 (WebUI > Compute 
> Hosts).

I suspect that, because of this, I see the following entries in 
/var/log/ovirt-engine/engine.log (only for node2):

2022-08-04 12:00:15,460Z WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-16) [] 
Could not associate brick 'node2.mydomain.lab:/gluster_bricks/vmstore/vmstore' 
of volume '1ca6a01a-9230-4bb1-844e-8064f3eadb53' with correct network as no 
gluster network found in cluster '1770ade4-0f6f-11ed-b8f6-00163e6faae8'

Is this something I need to be worried about or correct somehow?

From node1:

gluster> peer status
Number of Peers: 2

Hostname: node2.mydomain.lab
Uuid: a4468bb0-a3b3-42bc-9070-769da5a13427
State: Peer in Cluster (Connected)
Other names:
172.16.20.X

Hostname: node3.mydomain.lab
Uuid: 2b1273a4-667e-4925-af5e-00904988595a
State: Peer in Cluster (Connected)
Other names:
172.16.20.Z


gluster> volume status
(same output, Online: Y, for the vmstore and engine volumes)

Status of volume: data
Gluster process                                     TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------------
Brick node1.mydomain.lab:/gluster_bricks/data/data  58734     0          Y       31586
Brick node2.mydomain.lab:/gluster_bricks/data/data  55148     0          Y       4317
Brick node3.mydomain.lab:/gluster_bricks/data/data  57021     0          Y       5242
Self-heal Daemon on localhost                       N/A       N/A        Y       63170
Self-heal Daemon on node2.mydomain.lab              N/A       N/A        Y       4365
Self-heal Daemon on node3.mydomain.lab              N/A       N/A        Y       5385