[ovirt-users] Re: Populating ISO storage domain

2020-09-26 Thread matthew.st...@fujitsu.com
I set up the folder and the files as 36:36, with a mode of 775 on the directory and 644 on the files. Hours later, they are still not being read. From: Edward Berger Sent: Saturday, September 26, 2020 2:01 PM To: Stier, Matthew Cc: users@ovirt.org Subject: Re: [ovirt-users] Populating ISO storage
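(A quick sanity check, as a sketch: list with numeric IDs from the NFS server side; the export path /exports/iso below is hypothetical.)

    ls -lnd /exports/iso          # should show uid/gid 36 36 and mode 775
    ls -ln /exports/iso/*.iso     # should show uid/gid 36 36 and mode 644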

[ovirt-users] Re: Populating ISO storage domain

2020-09-26 Thread matthew.st...@fujitsu.com
It is being mounted under /rhev/data-center/mnt/ on the hosts, but is not being mounted on the self-hosted engine. From: matthew.st...@fujitsu.com Sent: Saturday, September 26, 2020 4:17 PM To: Edward Berger Cc: users@ovirt.org Subject: [ovirt-users] Re: Populating ISO storage domain I set up the folder and
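(To see where vdsm actually has the domain mounted on a given host, a sketch; the exact mount name depends on the NFS server and export path:)

    findmnt -t nfs,nfs4 | grep /rhev/data-center/mnt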

[ovirt-users] Re: Node 4.4.1 gluster bricks

2020-09-26 Thread Strahil Nikolov via Users
Since oVirt 4.4, the stage that deploys the oVirt node/host adds an LVM filter in /etc/lvm/lvm.conf, which is the reason behind that. Best Regards, Strahil Nikolov On Friday, September 25, 2020, 20:52:13 GMT+3, Staniforth, Paul wrote: Thanks, the gluster
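(For illustration, not from the thread: the generated filter accepts only the host's own physical volumes and rejects everything else, roughly like the line below; vdsm-tool config-lvm-filter is the command that computes and applies it. The PV id is a placeholder.)

    # /etc/lvm/lvm.conf
    filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-XXXXXX$|", "r|.*|" ]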

[ovirt-users] Re: Recreating ISO storage domain

2020-09-26 Thread matthew.st...@fujitsu.com
Got this fixed. Ignore. From: matthew.st...@fujitsu.com Sent: Saturday, September 26, 2020 1:39 PM To: users@ovirt.org Subject: [ovirt-users] Recreating ISO storage domain I have created a three-host oVirt cluster using 4.4.2. I created an ISO storage domain to hold my collection of ISO

[ovirt-users] Populating ISO storage domain

2020-09-26 Thread matthew.st...@fujitsu.com
I've created an ISO storage domain, and the ISOs placed in the export path do not show up under Storage > Storage Domains > iso > images, nor as available images when creating a new VM. I haven't located a method to get them noticed. There is a greyed-out 'scan disk' option. What is the proper
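(One likely culprit, sketched with a hypothetical export path and domain UUID: an ISO domain only lists files placed in its well-known images directory, not at the top of the export.)

    /exports/iso/<domain-uuid>/images/11111111-1111-1111-1111-111111111111/your.iso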

[ovirt-users] Re: Populating ISO storage domain

2020-09-26 Thread Edward Berger
If it's in an NFS folder, make sure the ownership is vdsm:kvm (36:36) On Sat, Sep 26, 2020 at 2:57 PM matthew.st...@fujitsu.com < matthew.st...@fujitsu.com> wrote: > I've created an ISO storage domain, and the ISOs placed in the export path > do not show up under Storage > Storage Domains > iso >
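(That is, something along these lines on the NFS server; the path is hypothetical.)

    chown -R 36:36 /exports/iso
    find /exports/iso -type f -name '*.iso' -exec chmod 644 {} +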

[ovirt-users] Re: Node upgrade to 4.4

2020-09-26 Thread thomas
I was looking forward to that presentation for exactly that reason, but it completely bypassed the HCI scenario, was very light on details, and of course assumed that everything would just work, because there is no easy fail-back and you're probably better off taking down the complete farm

[ovirt-users] Single Node HCI upgrade procedure from CentOS7/oVirt 4.3 to CentOS8/oVirt 4.4?

2020-09-26 Thread thomas
I can hear you saying: "You did understand that single-node HCI is just a toy, right?" For me, the primary use of a single-node HCI is adding some disaster resilience in small-server edge-type scenarios, where a three-node HCI provides the fault tolerance: 3+1 with a bit of distance, warm or

[ovirt-users] Re: oVirt Change hosts to FQDN

2020-09-26 Thread Jeremey Wise
Another note of color on this: I can't repair a brick, as Gluster refers to bricks by hostname and oVirt-engine now thinks of them by IP. Error while executing action Start Gluster Volume Reset Brick: Volume reset brick start failed: rc=-1 out=() err=['Pre Validation failed on
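(For reference, the reset-brick sequence from the gluster CLI has to use whatever name the volume was created with; the volume and brick names below are hypothetical.)

    gluster volume reset-brick myvol node1.example.com:/gluster/brick1 start
    gluster volume reset-brick myvol node1.example.com:/gluster/brick1 node1.example.com:/gluster/brick1 commit force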

[ovirt-users] Re: oVirt - Engine - VM Reconstitute

2020-09-26 Thread Strahil Nikolov via Users
Importing is done from the UI (Admin portal) -> Storage -> Domains -> Newly Added domain -> "Import VM" -> select the VM and you can import. Keep in mind that it is easier to import if all VM disks are on the same storage domain (I've opened an RFE for multi-domain import). Best Regards, Strahil

[ovirt-users] oVirt Change hosts to FQDN

2020-09-26 Thread Jeremey Wise
I posted that I had wiped out the oVirt-engine, running cleanup on all three nodes, and done a re-deployment. Then came adding the nodes back: though they all have entries for each other in /etc/hosts, and ssh works fine via both short and long names, I added the nodes back into the cluster but had to do it via IP to get
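(The /etc/hosts entries in question look like this on each node; addresses and names are placeholders.)

    10.0.0.11  node1.example.com  node1
    10.0.0.12  node2.example.com  node2
    10.0.0.13  node3.example.com  node3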