Opened: https://bugzilla.redhat.com/show_bug.cgi?id=1720261
Hopefully I put it in the correct section.

Regards

Adrian

On Thu, Jun 13, 2019 at 12:05 AM Strahil <[email protected]> wrote:

> Better raise a bug in bugzilla.redhat.com and mention the working 4.3.3
>
> Best Regards,
> Strahil Nikolov
>
> On Jun 13, 2019 04:30, [email protected] wrote:
> >
> > While trying to do a hyperconverged setup and use "Configure LV Cache"
> > with /dev/sdf, the deployment fails. If I don't use the LV cache SSD
> > disk the setup succeeds. Thought you might want to know; for now I
> > retested with 4.3.3 and all worked fine, so I am reverting to 4.3.3
> > unless you know of a workaround?
> >
> > Error:
> > TASK [gluster.infra/roles/backend_setup : Extend volume group] *****************
> > failed: [vmm11.mydomain.com] (item={u'vgname': u'gluster_vg_sdb',
> > u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname':
> > u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf',
> > u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode':
> > u'writethrough', u'cachemetalvsize': u'0.1G', u'cachelvsize': u'0.9G'}) =>
> > {"ansible_loop_var": "item", "changed": false, "err": "  Physical volume
> > \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf",
> > "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize":
> > "0.9G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb",
> > "cachemetalvsize": "0.1G", "cachemode": "writethrough",
> > "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname":
> > "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.",
> > "rc": 5}
> >
> > failed: [vmm12.mydomain.com] (item={u'vgname': u'gluster_vg_sdb',
> > u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname':
> > u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf',
> > u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode':
> > u'writethrough', u'cachemetalvsize': u'0.1G', u'cachelvsize': u'0.9G'}) =>
> > {"ansible_loop_var": "item", "changed": false, "err": "  Physical volume
> > \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf",
> > "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize":
> > "0.9G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb",
> > "cachemetalvsize": "0.1G", "cachemode": "writethrough",
> > "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname":
> > "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.",
> > "rc": 5}
> >
> > failed: [vmm10.mydomain.com] (item={u'vgname': u'gluster_vg_sdb',
> > u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname':
> > u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf',
> > u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode':
> > u'writethrough', u'cachemetalvsize': u'30G', u'cachelvsize': u'270G'}) =>
> > {"ansible_loop_var": "item", "changed": false, "err": "  Physical volume
> > \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf",
> > "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize":
> > "270G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb",
> > "cachemetalvsize": "30G", "cachemode": "writethrough", "cachethinpoolname":
> > "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg":
> > "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}
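For context, the failing "Extend volume group" task corresponds, in plain LVM terms, to attaching the SSD as a cache to the thin pool. A rough sketch of the equivalent manual commands follows (device and LV names are copied from the error item above; these are illustrative only, not the exact commands the gluster.infra role runs):

```shell
# Add the cache SSD to the existing data volume group
vgextend gluster_vg_sdb /dev/sdf

# Create the cache metadata LV and cache data LV on the SSD
lvcreate -L 0.1G -n cache_gluster_thinpool_gluster_vg_sdb gluster_vg_sdb /dev/sdf
lvcreate -L 0.9G -n cachelv_gluster_thinpool_gluster_vg_sdb gluster_vg_sdb /dev/sdf

# Combine them into a cache pool with writethrough mode
lvconvert --type cache-pool --cachemode writethrough \
  --poolmetadata gluster_vg_sdb/cache_gluster_thinpool_gluster_vg_sdb \
  gluster_vg_sdb/cachelv_gluster_thinpool_gluster_vg_sdb

# Attach the cache pool to the thin pool
lvconvert --type cache \
  --cachepool gluster_vg_sdb/cachelv_gluster_thinpool_gluster_vg_sdb \
  gluster_vg_sdb/gluster_thinpool_gluster_vg_sdb
```

The "Physical volume \"/dev/sdb\" still in use" / "Unable to reduce gluster_vg_sdb by /dev/sdb" message indicates the task is attempting a `vgreduce` of the data disk while logical volumes still occupy it, which LVM correctly refuses.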
> >
> > PLAY RECAP *********************************************************************
> > vmm10.mydomain.com           : ok=13   changed=4    unreachable=0    failed=1    skipped=10   rescued=0    ignored=0
> > vmm11.mydomain.com           : ok=13   changed=4    unreachable=0    failed=1    skipped=10   rescued=0    ignored=0
> > vmm12.mydomain.com           : ok=13   changed=4    unreachable=0    failed=1    skipped=10   rescued=0    ignored=0
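For reference, the cache parameters in the failing items correspond to the LV-cache section the wizard generates in the inventory. Assuming the gluster.infra role's `gluster_infra_cache_vars` layout (the key name and nesting here are an assumption; the values are copied from the failing item), that section would look roughly like:

```yaml
# Sketch of the LV-cache section of the generated inventory
# (assumed gluster_infra_cache_vars layout; values from the failing item).
gluster_infra_cache_vars:
  - vgname: gluster_vg_sdb
    cachedisk: /dev/sdf
    cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
    cachethinpoolname: gluster_thinpool_gluster_vg_sdb
    cachelvsize: 0.9G
    cachemetalvname: cache_gluster_thinpool_gluster_vg_sdb
    cachemetalvsize: 0.1G
    cachemode: writethrough
```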
> >
> >
> >
> >
> > ---------------------------------------------------------------------------------------------------------------------
> > # cat /etc/ansible/hc_wizard_inventory.yml
> > ---------------------------------------------------------------------------------------------------------------------
> > hc_nodes:
> >   hosts:
> >     vmm10.mydomain.com:
> >       gluster_infra_volume_gro



-- 
Adrian Quintero
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/7TFOGXRYRRJRLWJJDAJWZLAYGB3APXOA/
