[ovirt-users] Re: hosted-engine deploy - Network error during communication with the Host. (NFS)

2021-11-24 Thread Matyi Szabolcs
NFS sharing is fine; I can mount it from the VM and also from localhost.


[ovirt-users] Re: Creating VMs from templates with their own disks

2021-11-24 Thread Staniforth, Paul
Hello Sina,

To use the template:

Create a new VM and select the template.

Click the advanced options.

In the Boot Options section you can set the hard disk to be the first boot device and
unselect any CD.
In the Resource Allocation section you can set the storage allocation to Clone
instead of Thin (this will create a copy of the template disk); you can also
set the disk alias name.

You can also edit the template to make sure the hard disk is the first boot device.

I don't know what's in your Ansible YAML definition.

Cloning is for copying VMs or snapshots, not templates.
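
For reference, here is a minimal, untested sketch of what such a play could
look like, assuming the ovirt.ovirt collection is installed. The engine URL,
cluster name, template name and password variables are placeholders; only
clone00 / domain.tld is taken from your error message below.

---
- name: Clone an independent VM from a template (sketch only)
  hosts: localhost
  connection: local
  tasks:
    - name: Log in to the engine (placeholder URL and credentials)
      ovirt.ovirt.ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api
        username: admin@internal
        password: "{{ engine_password }}"

    - name: Create the VM with its own copy of the template disk
      ovirt.ovirt.ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: clone00
        cluster: Default                 # placeholder cluster name
        template: my-template            # placeholder template name
        clone: true                      # copy the template disk instead of linking to it (thin)
        boot_devices:
          - hd                           # boot from the cloned disk, not a CD/ISO
        cloud_init:
          host_name: clone00.domain.tld
          user_name: root
          root_password: "{{ vm_root_password }}"
        state: running

    - name: Log out of the engine
      ovirt.ovirt.ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"

With clone set, the new VM's disk should be independent of the template, and
listing hd as the only boot device should stop it dropping into the installer ISO.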

Regards,

Paul S.

From: Sina Owolabi 
Sent: 24 November 2021 09:28
To: Staniforth, Paul 
Cc: users@ovirt.org 
Subject: Re: [ovirt-users] Re: Creating VMs from templates with their own disks



Hello

Sorry for the late reply, work has been crazy.

This doesn't seem to work as advertised, or I am still not getting it.
Either way I would really appreciate some help and guidance.

Steps I have attempted:
1. Create and configure the VM as I want it to be (disk, partitioning, etc.).
2. Shut down the VM and create a template from it.

Cloning manually:
Cloning fails with this message:
Error while executing action:
clone00.domain.tld:

  *   Cannot add VM. One or more provided storage domains are either not in 
active status or of an illegal type for the requested operation.

I can't modify the storage allocation, and the disk it's attempting to use is the
disk of the source VM.

Manual template install:
Choosing to install manually with a template requires me to add a new disk, and 
to boot off the CD (defined in the template) and manually set things up. This I 
do not wish to do, because I would rather automate.

Cloning with ansible, defining the cloud-init script and using the template:
VM is successfully created, but logging in with remote-viewer drops me into the 
installation process (setting up from the attached ISO). Which is also not 
desired.

Please help me with what I am doing wrong.
Again, the goal is to have the VM set up with its own credentials.


On Thu, Nov 18, 2021 at 9:24 AM Staniforth, Paul 
mailto:p.stanifo...@leedsbeckett.ac.uk>> wrote:
Hello,
 VMs are created from a template; if you don't select one, the Blank
template is used. If a particular template is used, the VM can be a thin,
dependent VM: the VM's disk is linked to the template's disk and just carries
the changes made in the VM's disk (this is quicker and uses less space if you
have a lot of disks). The other option is to create a cloned VM, which copies
the template's disk to the VM so it is no longer dependent.

In the Ansible documentation, look for the clone option.
https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_vm_module.html

https://www.ovirt.org/documentation/virtual_machine_management_guide/index.html
For cloud-init: does the cloud-init package need to be installed on the
template image?


Regards,

Paul S.

From: 

[ovirt-users] Re: How to debug "Non Operational" host

2021-11-24 Thread Gervais de Montbrun
Hi Paul.

I've updated the /etc/hosts file on the engine and restarted the hosted-engine. 
It seems that the "Could not associate brick" errors have stopped. Thank you!
No change in my issue though ☹️.

It looks like Glusterd is starting up OK. I did force start the bricks on 
ovirt1. 
[r...@ovirt1.dgi ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
preset: disabled)
  Drop-In: /etc/systemd/system/glusterd.service.d
   └─99-cpu.conf
   Active: active (running) since Wed 2021-11-24 16:19:25 UTC; 1h 42min ago
 Docs: man:glusterd(8)
  Process: 2321 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
--log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 2354 (glusterd)
Tasks: 92 (limit: 1648201)
   Memory: 63.7G
  CPU: 1h 17min 42.666s
   CGroup: /glusterfs.slice/glusterd.service
   ├─2354 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
   ├─3247 /usr/sbin/glusterfsd -s ovirt1-storage.dgi --volfile-id 
vmstore.ovirt1-storage.dgi.gluster_bricks-vmstore-vmstore -p 
/var/run/gluster/vols/vmstore/ovirt1-storage.dgi-gluster_bricks-vmstore-vmstore.pid
 -S /var/run/gluster/fb93591764c8.socket --brick-name 
/gluster_bricks/vmstore/vmstore -l /var/log/glusterfs/bricks/gluster_b>
   ├─3275 /usr/sbin/glusterfsd -s ovirt1-storage.dgi --volfile-id 
engine.ovirt1-storage.dgi.gluster_bricks-engine-engine -p 
/var/run/gluster/vols/engine/ovirt1-storage.dgi-gluster_bricks-engine-engine.pid
 -S /var/run/gluster/66ebd47080b528d1.socket --brick-name 
/gluster_bricks/engine/engine -l /var/log/glusterfs/bricks/gluster_bricks-en>
   └─3287 /usr/sbin/glusterfs -s localhost --volfile-id shd/engine -p 
/var/run/gluster/shd/engine/engine-shd.pid -l /var/log/glusterfs/glustershd.log 
-S /var/run/gluster/c9b8692f3e532562.socket --xlator-option 
*replicate*.node-uuid=fdf2cf13-c2c5-4afa-8d73-76c50c69122a --process-name 
glustershd --client-pid=-6

Nov 24 16:19:22 ovirt1.dgi systemd[1]: Starting GlusterFS, a clustered 
file-system server...
Nov 24 16:19:25 ovirt1.dgi systemd[1]: Started GlusterFS, a clustered 
file-system server.
Nov 24 16:19:28 ovirt1.dgi glusterd[2354]: [2021-11-24 16:19:28.909836] C 
[MSGID: 106002] [glusterd-server-quorum.c:355:glusterd_do_volume_quorum_action] 
0-management: Server quorum lost for volume engine. Stopping local bricks.
Nov 24 16:19:28 ovirt1.dgi glusterd[2354]: [2021-11-24 16:19:28.910745] C 
[MSGID: 106002] [glusterd-server-quorum.c:355:glusterd_do_volume_quorum_action] 
0-management: Server quorum lost for volume vmstore. Stopping local bricks.
Nov 24 16:19:31 ovirt1.dgi glusterd[2354]: [2021-11-24 16:19:31.925206] C 
[MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 
0-management: Server quorum regained for volume engine. Starting local bricks.
Nov 24 16:19:31 ovirt1.dgi glusterd[2354]: [2021-11-24 16:19:31.938507] C 
[MSGID: 106003] [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 
0-management: Server quorum regained for volume vmstore. Starting local bricks.

The bricks are showing all green in the UI at least, but they never seem to
catch up to the point of showing no unsynced entries:


As for mounting the bricks, they are mounting based on what is in /etc/fstab.
[r...@ovirt1.dgi ~]# cat /etc/fstab 
#
# /etc/fstab
# Created by anaconda on Wed Feb 17 20:17:28 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/onn/ovirt-node-ng-4.4.9-0.20211026.0+1 / xfs defaults,discard 0 0
UUID=07e9dfea-9710-483d-bc63-aea7dbb5801d /boot xfs defaults 0 0
/dev/mapper/onn-home /home xfs defaults,discard 0 0
/dev/mapper/onn-tmp /tmp xfs defaults,discard 0 0
/dev/mapper/onn-var /var xfs defaults,discard 0 0
/dev/mapper/onn-var_log /var/log xfs defaults,discard 0 0
/dev/mapper/onn-var_log_audit /var/log/audit xfs defaults,discard 0 0
/dev/mapper/onn-swap none swap defaults 0 0
UUID=4e2c88e4-2bae-4b41-bb62-631820435845 /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
UUID=ad938b6e-44d2-492a-a313-c4d0c0608e09 /gluster_bricks/vmstore xfs inode64,noatime,nodiratime 0 0

[r...@ovirt1.dgi ~]# mount |grep gluster_bricks
/dev/mapper/gluster_vg_sdb-gluster_lv_engine on /gluster_bricks/engine type xfs 
(rw,noatime,nodiratime,seclabel,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=3072,noquota)
/dev/mapper/gluster_vg_sdb-gluster_lv_vmstore on /gluster_bricks/vmstore type 
xfs 
(rw,noatime,nodiratime,seclabel,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=3072,noquota)

And what does this "extra" mount do?
[r...@ovirt1.dgi ~]# mount | grep storage

[ovirt-users] Re: Creating VMs from templates with their own disks

2021-11-24 Thread Sina Owolabi
Hello

Sorry for the late reply, work has been crazy.

This doesn't seem to work as advertised, or I am still not getting it.
Either way I would really appreciate some help and guidance.

Steps I have attempted:
1. Create and configure the VM as I want it to be (disk, partitioning, etc.).
2. Shut down the VM and create a template from it.

Cloning manually:
Cloning fails with this message:
Error while executing action:
clone00.domain.tld:

   - Cannot add VM. One or more provided storage domains are either not in
   active status or of an illegal type for the requested operation.

I can't modify the storage allocation, and the disk it's attempting to use is
the disk of the source VM.

Manual template install:
Choosing to install manually with a template requires me to add a new disk,
and to boot off the CD (defined in the template) and manually set things
up. This I do not wish to do, because I would rather automate.

Cloning with ansible, defining the cloud-init script and using the template:
VM is successfully created, but logging in with remote-viewer drops me into
the installation process (setting up from the attached ISO). Which is also
not desired.

Please help me with what I am doing wrong.
Again, the goal is to have the VM set up with its own credentials.


On Thu, Nov 18, 2021 at 9:24 AM Staniforth, Paul <
p.stanifo...@leedsbeckett.ac.uk> wrote:

> Hello,
>  VMs are created from a template; if you don't select one, the Blank
> template is used. If a particular template is used, the VM can be a thin,
> dependent VM: the VM's disk is linked to the template's disk and just carries
> the changes made in the VM's disk (this is quicker and uses less space if you
> have a lot of disks). The other option is to create a cloned VM, which copies
> the template's disk to the VM so it is no longer dependent.
>
> In the Ansible documentation, look for the clone option.
>
> https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_vm_module.html
>
>
> https://www.ovirt.org/documentation/virtual_machine_management_guide/index.html
> For cloud-init: does the cloud-init package need to be installed on the
> template image?
>
>
> Regards,
>
> Paul S.
> --
> *From:* notify.s...@gmail.com 
> *Sent:* 18 November 2021 07:34
> *To:* users@ovirt.org 
> *Subject:* [ovirt-users] Re: Creating VMs from templates with their own
> disks
>
>
> I'm sorry, I am trying to wrap my head around this, but it is difficult.
>
> I just want to be able to stand up new vms, with their own storage,
> similar to how I can with plain vanilla KVM, with a template or without,
> maybe even with a kickstart, and hopefully with ansible.
>
> Right now, any time I try to create a VM using the template (with
> Ansible), it gets the template disk attached, and from the console I see
> the new VM is named the same as the VM I created the template from. The
> cloud-init script that is meant to rename the VM and join it to IPA is ignored.
>
> If I create storage for the VM before creating it, both the template
> storage and the new storage are attached to the VM, which is also
> confusing. Cloud-init is also ignored.
>
> I didn't think something this straightforward would end up needing a shift
> in thinking about how VMs are created, especially with a product that's
> more than likely using KVM under the hood.
>
> I would appreciate some straightforward guiding steps, if I can get them.
> Really. It's been a frustrating week.
>
>
> > On 2021-11-17 13:50, Sina Owolabi wrote:
> >
> >
> > You can create a template with no disk; then VMs created from that
> > template will also have no disk. Then add a new disk to the VM after you
> > create it. This is how the default blank template works. You can also
> > create a template with an empty disk, then every VM created will also
> > get an empty disk by default. You can always rename disks as well.
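
(A minimal, untested sketch of that blank-template-plus-new-disk approach with
the ovirt.ovirt collection, for reference; every name, size, and the storage
domain below are placeholders:)

- name: Create a VM from the Blank template, with no disk of its own yet
  ovirt.ovirt.ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: newvm01                  # placeholder VM name
    cluster: Default               # placeholder cluster name
    template: Blank                # the built-in empty template
    memory: 4GiB
    cpu_cores: 2
    state: stopped

- name: Attach a brand-new bootable disk to the VM
  ovirt.ovirt.ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: newvm01_disk0            # placeholder disk alias
    vm_name: newvm01
    size: 20GiB
    format: cow
    interface: virtio_scsi
    storage_domain: vmstore        # placeholder storage domain name
    bootable: true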

[ovirt-users] What is the best practice to delete vdisks in a 100% used gluster storage domain

2021-11-24 Thread dhanaraj.ramesh--- via Users
Hi Team

In my three-node hyperconverged (Gluster + oVirt) setup, one of the storage
domains went offline in the Data Center due to 100% storage space utilization,
but luckily it is still online in Gluster. How should I properly delete some of
the vdisks in that offline storage domain from Gluster and bring it back
online?


[ovirt-users] Re: How to debug "Non Operational" host

2021-11-24 Thread Staniforth, Paul

Hi Gervais,

  The engine doesn't need to be able to ping the IP addresses; it just
needs to know what they are, so adding them to the /etc/hosts file should work.
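
For example, something like this in /etc/hosts on the engine (the 192.168.x.x
addresses are placeholders; use the hosts' real storage-network IPs):

192.168.100.11   ovirt1-storage.dgi
192.168.100.12   ovirt2-storage.dgi
192.168.100.13   ovirt3-storage.dgi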

Also, I would check ovirt1: is it mounting the brick, what does "systemctl
status glusterd" show, and what is in the logs under /var/log/glusterfs?


Regards,

Paul S.

From: Gervais de Montbrun 
Sent: 24 November 2021 01:16
To: Staniforth, Paul 
Cc: Vojtech Juranek ; users@ovirt.org 
Subject: Re: [ovirt-users] How to debug "Non Operational" host



Hi Paul,

I don't quite get what you mean by this:

assuming you have a storage network for the gluster nodes the engine needs to
be able to resolve the host addresses

The storage network is on 10Gb network cards and plugged into a stand-alone
switch. The hosted-engine is not on the same network at all and cannot ping
the IPs associated with those cards. Are you saying that it needs access to
that network, or that it needs to be able to resolve the IPs? Can I add them
to the /etc/hosts file on the ovirt-engine, or do I need to reconfigure my
setup? It was working as currently configured before applying the update.

I have no idea why the ovirt1 server is not showing up with the FQDN. I set up
all the servers the same way. It's been like that since I set things up. I have 
looked for where this might be corrected, but can't find it. Ideas?

The yellow bricks... I can force-start them (and I have in the past), but now
they turn green for a few minutes and then return to red.

Cheers,
Gervais



On Nov 23, 2021, at 12:57 PM, Staniforth, Paul 
mailto:p.stanifo...@leedsbeckett.ac.uk>> wrote:

Hello Gervais,

   Is the brick mounted on ovirt1? Can you mount it using the
settings in /etc/fstab?

The hostname for ovirt1 is not using an FQDN.

assuming you have a storage network for the gluster nodes, the engine needs to
be able to resolve the host addresses:

ovirt1-storage.dgi
ovirt2-storage.dgi
ovirt3-storage.dgi

So that it can assign them to the correct network.

When the volume is showing yellow, you can force-restart the bricks again from the GUI.

Regards,

Paul S.


From: Gervais de Montbrun 
mailto:gerv...@demontbrun.com>>
Sent: 23 November 2021 13:42
To: Vojtech Juranek mailto:vjura...@redhat.com>>
Cc: users@ovirt.org 
mailto:users@ovirt.org>>
Subject: [ovirt-users] Re: How to debug "Non Operational" host


Hi Vojta,

Thanks for the help.

I tried to activate my server this morning and captured the logs from vdsm.log 
and engine.log. They are attached.

Something went awry with my gluster (I think) as it is showing that the bricks 
on the affected server (ovirt1) are not mounted:







The networking looks fine.

Cheers,
Gervais



> On Nov 23, 2021, at 3:37 AM, Vojtech Juranek 
> mailto:vjura...@redhat.com>> wrote:
>
> On Tuesday, 23 November 2021 03:36:07 CET Gervais de Montbrun wrote:
>> Hi Folks,
>>
>> I did a minor upgrade on the first host in my cluster and now it is
>> reporting "Non Operational"
>>
>> This is what yum showed as updatable. However, I did the update through the
>> ovirt-engine web interface.
>>
>> ovirt-node-ng-image-update.noarch              4.4.9-1.el8      ovirt-4.4
>> Obsoleting Packages
>> ovirt-node-ng-image-update.noarch              4.4.9-1.el8      ovirt-4.4
>>     ovirt-node-ng-image-update.noarch          4.4.8.3-1.el8    @System
>> ovirt-node-ng-image-update.noarch              4.4.9-1.el8      ovirt-4.4
>>     ovirt-node-ng-image-update-placeholder.noarch  4.4.8.3-1.el8  @System
>>
>> How do I start to debug this issue?
>
> Check engine log in /var/log/ovirt-engine/engine.log on the machine where
> engine runs
>
>>
>>
>> Also, it looks like the vmstore brick is not mounting on that host. I only
>> see the engine mounted.
>
>
> Could you also attach relevant part of vdsm log (/var/log/vdsm/vdsm.log) from
> the machine where mount failed? You should see some mount related error there.
> This could be also a reason why hosts become non-operational.
>
> Thanks
> Vojta
>
>> Broken server:
>> r...@ovirt1.dgi log]# mount | grep storage
>> ovirt1-storage.dgi:/engine on
>> /rhev/data-center/mnt/glusterSD/ovirt1-storage.dgi:_engine type
>> fuse.glusterfs
>> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=
>> 131072)
>>
>> Working server:
>> [r...@ovirt2.dgi ~]# mount | grep storage
>>