Re: [ovirt-users] 'Sanlock lockspace add failure' when trying to set up hosted engine 4.2 on existing gluster cluster

2018-03-09 Thread Oliver Dietzel
Thanks a lot, we already did this a couple of minutes ago; it worked!

-----Original Message-----
From: Martin Sivak [mailto:msi...@redhat.com]
Sent: Friday, 9 March 2018 11:13
To: Oliver Dietzel <o.diet...@rto.de>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] 'Sanlock lockspace add failure' when trying to set up hosted engine 4.2 on existing gluster cluster

Hi Oliver,

which version of oVirt are you running?

The issue seems to be that a correctly deployed hosted engine does not have any storage available in the webadmin, if I understand you correctly. Is that right?

We used to require two separate storage domains in 4.1 and older releases: one for the hosted engine and one for the rest of the VMs. 4.2.1 changed that. So if you are running 4.1, just add another storage domain [1] and the engine will finish the hosted engine initialization automatically after that.

[1] 
https://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/

See the paragraph: "Important: Log in as the admin@internal user to continue 
configuring the Engine and add further resources. You must create another data 
domain for the data center to be initialized to host regular virtual machine 
data, and for the Engine virtual machine to be visible. See "Storage" in the 
Administration Guide for different storage options and on how to add a data 
storage domain."

Best regards

Martin Sivak

On Fri, Mar 9, 2018 at 10:08 AM, Oliver Dietzel <o.diet...@rto.de> wrote:
> Install from the Node ISO on gluster works fine and the hosted engine VM
> installs on gluster, but after the installation is finished and the host
> rebooted, the gluster cluster is not added as data storage.
> Hosted engine is able to boot from gluster but not able to use it.
> Looks like HE doesn't use the gluster volume it booted from, but tries
> to add the same gluster volume a second time.
>
> Is there a workaround?
>
> Error message in web gui:
> VDSM ovirt-gluster.rto.de command CreateStoragePoolVDS failed: Cannot 
> acquire host id: (u'e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb',
> SanlockException(22, 'Sanlock lockspace add failure', 'Invalid 
> argument'))
>
> Error message in sanlock.log:
>
> [root@ovirt-gluster ~]# tail /var/log/sanlock.log
> 2018-03-09 09:37:05 812 [1082]: s5 host 1 2 791 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
> 2018-03-09 09:37:05 812 [1082]: s5 host 250 1 0 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
> 2018-03-09 09:37:20 828 [1093]: s5:r4 resource e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:e555168d-719d-4f73-8541-62395c97c1ff:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/images/7b6675f7-f739-40b8-a554-b19795fe57c0/e555168d-719d-4f73-8541-62395c97c1ff.lease:0 for 3,12,5566
> 2018-03-09 09:40:58 1046 [1093]: s6 lockspace hosted-engine:1:/var/run/vdsm/storage/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/3b22b2fe-f4d5-4d0c-995a-03b2850b674b/eb7218ff-bdbc-49c5-af17-a62a7385d299:0
> 2018-03-09 09:41:20 1067 [1082]: s6 host 1 1 1046 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
> 2018-03-09 09:42:23 1130 [1093]: s5:r5 resource e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:e555168d-719d-4f73-8541-62395c97c1ff:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/images/7b6675f7-f739-40b8-a554-b19795fe57c0/e555168d-719d-4f73-8541-62395c97c1ff.lease:0 for 2,9,11635
> 2018-03-09 09:44:30 1258 [1093]: add_lockspace e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:250:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0 conflicts with name of list1 s5 e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:1:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0
> 2018-03-09 09:44:37 1264 [1093]: s7 lockspace hosted-engine:1:/var/run/vdsm/storage/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/3b22b2fe-f4d5-4d0c-995a-03b2850b674b/eb7218ff-bdbc-49c5-af17-a62a7385d299:0
> 2018-03-09 09:44:58 1285 [1082]: s7 host 1 2 1264 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
> 2018-03-09 09:46:42 1389 [1093]: add_lockspace e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:250:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0 conflicts with name of list1 s5 e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:1:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0
> ___
> Oliver Dietzel
> RTO GmbH
> Hanauer Landstraße 439
> 60314 Frankfurt


[ovirt-users] 'Sanlock lockspace add failure' when trying to set up hosted engine 4.2 on existing gluster cluster

2018-03-09 Thread Oliver Dietzel
Install from the Node ISO on gluster works fine and the hosted engine VM installs on gluster, but after the installation is finished and the host rebooted, the gluster cluster is not added as data storage.
Hosted engine is able to boot from gluster but not able to use it. Looks like HE doesn't use the gluster volume it booted from, but tries to add the same gluster volume a second time.

Is there a workaround?

Error message in web gui:
VDSM ovirt-gluster.rto.de command CreateStoragePoolVDS failed: Cannot acquire 
host id: (u'e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb', SanlockException(22, 
'Sanlock lockspace add failure', 'Invalid argument'))

Error message in sanlock.log:

[root@ovirt-gluster ~]# tail /var/log/sanlock.log
2018-03-09 09:37:05 812 [1082]: s5 host 1 2 791 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
2018-03-09 09:37:05 812 [1082]: s5 host 250 1 0 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
2018-03-09 09:37:20 828 [1093]: s5:r4 resource e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:e555168d-719d-4f73-8541-62395c97c1ff:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/images/7b6675f7-f739-40b8-a554-b19795fe57c0/e555168d-719d-4f73-8541-62395c97c1ff.lease:0 for 3,12,5566
2018-03-09 09:40:58 1046 [1093]: s6 lockspace hosted-engine:1:/var/run/vdsm/storage/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/3b22b2fe-f4d5-4d0c-995a-03b2850b674b/eb7218ff-bdbc-49c5-af17-a62a7385d299:0
2018-03-09 09:41:20 1067 [1082]: s6 host 1 1 1046 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
2018-03-09 09:42:23 1130 [1093]: s5:r5 resource e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:e555168d-719d-4f73-8541-62395c97c1ff:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/images/7b6675f7-f739-40b8-a554-b19795fe57c0/e555168d-719d-4f73-8541-62395c97c1ff.lease:0 for 2,9,11635
2018-03-09 09:44:30 1258 [1093]: add_lockspace e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:250:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0 conflicts with name of list1 s5 e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:1:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0
2018-03-09 09:44:37 1264 [1093]: s7 lockspace hosted-engine:1:/var/run/vdsm/storage/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/3b22b2fe-f4d5-4d0c-995a-03b2850b674b/eb7218ff-bdbc-49c5-af17-a62a7385d299:0
2018-03-09 09:44:58 1285 [1082]: s7 host 1 2 1264 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
2018-03-09 09:46:42 1389 [1093]: add_lockspace e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:250:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0 conflicts with name of list1 s5 e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:1:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0
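
What the log shows is the same lockspace UUID first registered as s5 with host id 1, then re-added with host id 250, which sanlock rejects. For diagnosis, sanlock can list the lockspaces it currently holds; an illustrative check with the standard sanlock CLI (output will differ per host):

# List the lockspaces the sanlock daemon currently holds; the
# e6d008f7-... entry registered with host id 1 should show up here.
sanlock client gets
# Dump full daemon state, including resources held per lockspace:
sanlock client status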
___
Oliver Dietzel

RTO GmbH
Hanauer Landstraße 439
60314 Frankfurt



[ovirt-users] Setup ovirt-guest-agent from tarball possible?

2018-02-19 Thread Oliver Dietzel
Hi, I am trying to install ovirt-guest-agent on a Clear Linux VM (already up and running in our oVirt test cluster).
The usual Fedora / el7 RPMs do not work.

Is it possible to install ovirt-guest-agent from a tarball? Or do I have to rebuild a source RPM?

And where do I find the latest tarball and source RPM of this package / these packages?
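
For reference, building from source is usually possible when no packaged build fits; a hedged sketch, assuming the upstream ovirt-guest-agent repository on GitHub and an autotools build (untested on Clear Linux):

# Sketch only: build ovirt-guest-agent from the upstream source tree.
# Repo URL, layout and build steps are assumptions, not from this thread.
git clone https://github.com/oVirt/ovirt-guest-agent.git
cd ovirt-guest-agent            # repo layout assumed
autoreconf -if                  # regenerate the autotools files
./configure --prefix=/usr
make
make install                    # as root; then enable the service by hand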

Any help appreciated, thanks in advance
Oli
___
Oliver Dietzel
RTO GmbH
Hanauer Landstraße 439
60314 Frankfurt


Re: [ovirt-users] Ovirt Installation on Centos 7.3 1704 is broken

2017-05-08 Thread Oliver Dietzel
Thanks a lot!

From: Eduardo Mayoral [mailto:emayo...@arsys.es]
Sent: Monday, 8 May 2017 12:22
To: Oliver Dietzel <o.diet...@rto.de>; 'users@ovirt.org' <users@ovirt.org>
Subject: Re: [ovirt-users] Ovirt Installation on Centos 7.3 1704 is broken


Quoted from https://www.ovirt.org/release/4.1.1/#fedora--centos--rhel



EPEL

TL;DR Don't enable all of EPEL on oVirt machines.

The ovirt-release package enables the epel repositories and includes several 
specific packages that are required from there. It also enables and uses the 
CentOS OpsTools SIG repos, for other packages.

EPEL currently includes collectd 5.7.1, and the collectd package there includes the write_http plugin.

OpsTools currently includes collectd 5.7.0, and the write_http plugin is packaged separately.

ovirt-release does not use collectd from EPEL, so if you only use that, you should be OK.

If you want to use other packages from EPEL, you should make sure to not 
include collectd. Either use includepkgs and add those you need, or use 
excludepkgs=collectd*.

The correct directive is "exclude=collectd*" , not "excludepkgs=collectd*" , 
but other than that, this looks like what you are experiencing.
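
In repo-file terms that means something like this in /etc/yum.repos.d/epel.repo (illustrative excerpt; keep the rest of the stanza as shipped):

# /etc/yum.repos.d/epel.repo (excerpt) -- keep EPEL's collectd out of
# the way so the OpsTools SIG build is used instead:
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
enabled=1
gpgcheck=1
exclude=collectd*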



Eduardo Mayoral Jimeno (emayo...@arsys.es)

Systems administrator. Platforms department. Arsys internet.

+34 941 620 145 ext. 5153
On 08/05/17 12:14, Oliver Dietzel wrote:

collectd-disk and collectd-write_http require an older version of collectd than the repos offer (7.2.0-2 instead of 7.2.1-2).

Will there be updated versions of collectd-disk and -write_http?

To reproduce, try # yum -y install ovirt-engine

on a fully updated CentOS 7.3 1704
https://buildlogs.centos.org/rolling/7/isos/x86_64/CentOS-7-x86_64-Minimal-1704-01.iso

___________
Oliver Dietzel



[ovirt-users] Ovirt Installation on Centos 7.3 1704 is broken

2017-05-08 Thread Oliver Dietzel
collectd-disk and collectd-write_http require an older version of collectd than the repos offer (7.2.0-2 instead of 7.2.1-2).

Will there be updated versions of collectd-disk and -write_http?

To reproduce, try # yum -y install ovirt-engine

on a fully updated CentOS 7.3 1704
https://buildlogs.centos.org/rolling/7/isos/x86_64/CentOS-7-x86_64-Minimal-1704-01.iso
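
To confirm the version mismatch on an affected machine, yum can show every collectd build each enabled repo offers; illustrative commands, output will vary:

# Show all available builds of the packages in question, per repo:
yum --showduplicates list collectd collectd-disk collectd-write_http
# Show what collectd-write_http requires, to see the version pin:
yum deplist collectd-write_http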


___
Oliver Dietzel
 



Re: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on gluster, existing disk sdb not found or filtered, deployment fails

2017-05-03 Thread Oliver Dietzel
Worked as described in the blog entry:

"The installer will ask if you want to configure your host and cluster for 
Gluster. Again, click "Next" to proceed. In some of my tests, the installer 
failed at this point, with an error message of Failed to execute stage 
'Environment customization'. When I encountered this, I clicked "Restart 
Setup", repeated the above steps, and was able to proceed normally."

https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/

After a setup restart this error went away.

The last error I had was in the last stage of the setup process.

The installer was unable to connect to the engine VM after creation and timed out after about the tenth retry.
Postinstall failed, but the VM itself was up and running, and I was able to SSH into it and connect to the web UI.

Log attached.

Thanks, Oli

-----Original Message-----
From: knarra [mailto:kna...@redhat.com]
Sent: Wednesday, 3 May 2017 13:49
To: Oliver Dietzel <o.diet...@rto.de>; 'users@ovirt.org' <users@ovirt.org>
Subject: Re: AW: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on gluster, existing disk sdb not found or filtered, deployment fails

On 05/03/2017 03:53 PM, Oliver Dietzel wrote:
> Actual size as displayed by lsblk is 558,9G; a combined size of 530 GB
> worked (engine 100, data 180, vmstore 250), but only without thin
> provisioning. Deployment failed with thin provisioning enabled, but worked
> with fixed sizes.
>
> Now I am stuck in hosted engine deployment (having answered yes when asked
> whether to install with gluster) with the error:
>
> "Failed to execute stage 'Environment customization': Invalid value provided
> to 'ENABLE_HC_GLUSTER_SERVICE'"
Hi,

 Can you provide the exact question and the response you gave that caused your setup to fail?

Thanks
kasturi
>
>
>
> -----Original Message-----
> From: knarra [mailto:kna...@redhat.com]
> Sent: Wednesday, 3 May 2017 12:16
> To: Oliver Dietzel <o.diet...@rto.de>; 'users@ovirt.org' <users@ovirt.org>
> Subject: Re: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on
> gluster, existing disk sdb not found or filtered, deployment fails
>
> On 05/03/2017 03:20 PM, Oliver Dietzel wrote:
>> Thanks a lot, I already got rid of the multipaths.
>>
>> Now, five tries later, I am trying to understand how the disk space calculation works.
>>
>> I already understand that the combined GByte limit for my drive sdb is around 530.
>>
>>> sdb    8:16   0 558,9G  0 disk
>> Now the thin pool creation kicks me! :)
>>
>> (I do a vgremove gluster_vg_sdb on all hosts and reboot all three hosts between retries)
>>
>> TASK [Create LVs with specified size for the VGs] **
>> failed: [hv1.iw.rto.de] (item={u'lv': u'gluster_thinpool_sdb', u'size': u'530GB', u'extent': u'100%FREE', u'vg': u'gluster_vg_sdb'}) => {"failed": true, "item": {"extent": "100%FREE", "lv": "gluster_thinpool_sdb", "size": "530GB", "vg": "gluster_vg_sdb"}, "msg": "  Insufficient suitable allocatable extents for logical volume gluster_thinpool_sdb: 135680 more required\n", "rc": 5}
I think you should input the size as 500GB if your actual disk size is 530?
>> -----Original Message-----
>> From: knarra [mailto:kna...@redhat.com]
>> Sent: Wednesday, 3 May 2017 11:17
>> To: Oliver Dietzel <o.diet...@rto.de>; 'users@ovirt.org' <users@ovirt.org>
>> Subject: Re: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on
>> gluster, existing disk sdb not found or filtered, deployment fails
>>
>> On 05/03/2017 02:06 PM, Oliver Dietzel wrote:
>>> Hi,
>>>
>>> I am trying to set up a 3-node gluster-based oVirt cluster, following this guide:
>>> https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/
>>>
>>> oVirt nodes were installed with all disks available in the system; the
>>> installer was limited to use only /dev/sda (both sda and sdb are HPE
>>> logical volumes on a P410 RAID controller)
>>>
>>>
>>> Glusterfs deployment fails in the last step before engine setup:
>>>
>>> PLAY RECAP *
>>> hv1.iw  : ok=1    changed=1    unreachable=0    failed=0
>>> hv2.iw  : ok=1    changed=1    unreachable=0    failed=0

Re: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on gluster, existing disk sdb not found or filtered, deployment fails

2017-05-03 Thread Oliver Dietzel
Actual size as displayed by lsblk is 558,9G; a combined size of 530 GB worked (engine 100, data 180, vmstore 250), but only without thin provisioning. Deployment failed with thin provisioning enabled, but worked with fixed sizes.

Now I am stuck in hosted engine deployment (having answered yes when asked whether to install with gluster) with the error:

"Failed to execute stage 'Environment customization': Invalid value provided to 'ENABLE_HC_GLUSTER_SERVICE'"



-----Original Message-----
From: knarra [mailto:kna...@redhat.com]
Sent: Wednesday, 3 May 2017 12:16
To: Oliver Dietzel <o.diet...@rto.de>; 'users@ovirt.org' <users@ovirt.org>
Subject: Re: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on gluster, existing disk sdb not found or filtered, deployment fails

On 05/03/2017 03:20 PM, Oliver Dietzel wrote:
> Thanks a lot, I already got rid of the multipaths.
>
> Now, five tries later, I am trying to understand how the disk space calculation works.
>
> I already understand that the combined GByte limit for my drive sdb is around 530.
>
>> sdb    8:16   0 558,9G  0 disk
> Now the thin pool creation kicks me! :)
>
> (I do a vgremove gluster_vg_sdb on all hosts and reboot all three hosts between retries)
>
> TASK [Create LVs with specified size for the VGs] **
> failed: [hv1.iw.rto.de] (item={u'lv': u'gluster_thinpool_sdb', u'size': u'530GB', u'extent': u'100%FREE', u'vg': u'gluster_vg_sdb'}) => {"failed": true, "item": {"extent": "100%FREE", "lv": "gluster_thinpool_sdb", "size": "530GB", "vg": "gluster_vg_sdb"}, "msg": "  Insufficient suitable allocatable extents for logical volume gluster_thinpool_sdb: 135680 more required\n", "rc": 5}
I think you should input the size as 500GB if your actual disk size is 530?
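
The likely arithmetic behind that suggestion: a thin pool needs spare extents in the volume group for its metadata LV, so asking for 530GB on a 558,9G device can leave too few free extents. A hedged sketch of the manual equivalent, sizing by percentage instead (device and VG/LV names taken from the log above; the 90% figure is illustrative, not from the thread):

# Sketch: size the thin pool by percentage so LVM can still allocate
# the pool's metadata LV within the same volume group.
vgcreate gluster_vg_sdb /dev/sdb
lvcreate --type thin-pool -l 90%FREE -n gluster_thinpool_sdb gluster_vg_sdb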
>
> -----Original Message-----
> From: knarra [mailto:kna...@redhat.com]
> Sent: Wednesday, 3 May 2017 11:17
> To: Oliver Dietzel <o.diet...@rto.de>; 'users@ovirt.org' <users@ovirt.org>
> Subject: Re: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on
> gluster, existing disk sdb not found or filtered, deployment fails
>
> On 05/03/2017 02:06 PM, Oliver Dietzel wrote:
>> Hi,
>>
>> I am trying to set up a 3-node gluster-based oVirt cluster, following this guide:
>> https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/
>>
>> oVirt nodes were installed with all disks available in the system; the
>> installer was limited to use only /dev/sda (both sda and sdb are HPE
>> logical volumes on a P410 RAID controller)
>>
>>
>> Glusterfs deployment fails in the last step before engine setup:
>>
>> PLAY RECAP *
>> hv1.iw  : ok=1    changed=1    unreachable=0    failed=0
>> hv2.iw  : ok=1    changed=1    unreachable=0    failed=0
>> hv3.iw  : ok=1    changed=1    unreachable=0    failed=0
>>
>>
>> PLAY [gluster_servers] *
>>
>> TASK [Clean up filesystem signature] ***
>> skipping: [hv1.iw] => (item=/dev/sdb)
>> skipping: [hv2.iw] => (item=/dev/sdb)
>> skipping: [hv3.iw] => (item=/dev/sdb)
>>
>> TASK [Create Physical Volume] **
>> failed: [hv3.iw] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "  Device /dev/sdb not found (or ignored by filtering).\n", "rc": 5}
>> failed: [hv1.iw] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "  Device /dev/sdb not found (or ignored by filtering).\n", "rc": 5}
>> failed: [hv2.iw] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "  Device /dev/sdb not found (or ignored by filtering).\n", "rc": 5}
>>
>>
>> But: /dev/sdb exists on all hosts
>>
>> [root@hv1 ~]# lsblk
>> NAME                                MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
>> sda                                   8:0    0 136,7G  0 disk

[ovirt-users] Ovirt Node 4.1.1.1 installation, engine on gluster, existing disk sdb not found or filtered, deployment fails

2017-05-03 Thread Oliver Dietzel
Hi,

I am trying to set up a 3-node gluster-based oVirt cluster, following this guide:
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/

oVirt nodes were installed with all disks available in the system; the installer was limited to use only /dev/sda (both sda and sdb are HPE logical volumes on a P410 RAID controller)


Glusterfs deployment fails in the last step before engine setup:

PLAY RECAP *
hv1.iw  : ok=1    changed=1    unreachable=0    failed=0
hv2.iw  : ok=1    changed=1    unreachable=0    failed=0
hv3.iw  : ok=1    changed=1    unreachable=0    failed=0


PLAY [gluster_servers] *

TASK [Clean up filesystem signature] ***
skipping: [hv1.iw] => (item=/dev/sdb) 
skipping: [hv2.iw] => (item=/dev/sdb) 
skipping: [hv3.iw] => (item=/dev/sdb) 

TASK [Create Physical Volume] **
failed: [hv3.iw] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "  Device /dev/sdb not found (or ignored by filtering).\n", "rc": 5}
failed: [hv1.iw] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "  Device /dev/sdb not found (or ignored by filtering).\n", "rc": 5}
failed: [hv2.iw] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "  Device /dev/sdb not found (or ignored by filtering).\n", "rc": 5}


But: /dev/sdb exists on all hosts

[root@hv1 ~]# lsblk
NAME                                MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                   8:0    0 136,7G  0 disk
...
sdb                                   8:16   0 558,9G  0 disk
└─3600508b1001c350a2c1748b0a0ff3860 253:5    0 558,9G  0 mpath



What can I do to make this work?
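
The fix Oliver reports in a later message was getting rid of the multipath mapping that had claimed sdb (visible above as the 3600508b... child of sdb). A hedged sketch of one common way to do that, reusing the wwid from the lsblk output:

# Sketch: blacklist the local RAID volume in /etc/multipath.conf so the
# physical volume can be created on /dev/sdb directly.
cat >> /etc/multipath.conf <<'EOF'
blacklist {
    wwid "3600508b1001c350a2c1748b0a0ff3860"
}
EOF
systemctl reload multipathd                        # re-read the config
multipath -f 3600508b1001c350a2c1748b0a0ff3860     # flush the stale map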

___
Oliver Dietzel

