[ovirt-users] Re: Gluster Volumes - Correct Peer Connection
I think this is the issue. When HCI deployed the nodes, consumed the drives, and set up "engine", "data", and "vmstore", the GUI was correctly set to use the "storage" network via hostnames. Based on watching replication traffic, it is using the 10Gb "storage" network, and the CLI shows peers on that LAN as well. BUT oVirt keeps referring to nodes by the management hostname, e.g. thor.penguinpages.local (ovirtmgmt LAN, 1Gb, 172.16.100.0/24) vs thorst.penguinpages.local (storage LAN, 10Gb, 172.16.101.0/24).

We see this error when I notice a brick having a replication issue (which the CLI does not show, but that is a different topic :) ).

# node "medusa" showing three unsynced files. Select brick -> reset

But when I say "reset brick" to restart its replication, I get this error:

Error while executing action Start Gluster Volume Reset Brick: Volume reset brick start failed: rc=-1 out=() err=['brick: medusa_penguinpages_local:/gluster_bricks/vmstore/vmstore does not exist in volume: vmstore']

## the real brick name is

[root@odin ~]# gluster volume status vmstore
Status of volume: vmstore
Gluster process                                                    TCP Port  RDMA Port  Online  Pid
---------------------------------------------------------------------------------------------------
Brick thorst.penguinpages.local:/gluster_bricks/vmstore/vmstore    49155     0          Y       14179
Brick odinst.penguinpages.local:/gluster_bricks/vmstore/vmstore    49156     0          Y       8776
Brick medusast.penguinpages.local:/gluster_bricks/vmstore/vmstore  49156     0          Y       11985
Self-heal Daemon on localhost                                      N/A       N/A        Y       2698
Self-heal Daemon on thorst.penguinpages.local                      N/A       N/A        Y       14256
Self-heal Daemon on medusast.penguinpages.local                    N/A       N/A        Y       12363

Task Status of Volume vmstore
---------------------------------------------------------------------------------------------------
There are no active volume tasks

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/7U2KOVJ4HQ26TKD46XCWYA6QTFFCEQNN/
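The error string suggests the engine is passing a mangled management-side name ("medusa_penguinpages_local") rather than the brick name Gluster actually knows. As a hedged workaround sketch (not the supported UI path), the reset-brick cycle can be driven from the CLI using the brick exactly as `gluster volume status` reports it:

```shell
# Take the brick offline for reset, naming it with the storage-network
# FQDN and path exactly as "gluster volume status vmstore" lists it:
gluster volume reset-brick vmstore \
    medusast.penguinpages.local:/gluster_bricks/vmstore/vmstore start

# Bring the same brick back (source and target identical) and trigger
# a heal of its contents:
gluster volume reset-brick vmstore \
    medusast.penguinpages.local:/gluster_bricks/vmstore/vmstore \
    medusast.penguinpages.local:/gluster_bricks/vmstore/vmstore commit force

# Watch the brick come back online and the unsynced entries drain:
gluster volume status vmstore
gluster volume heal vmstore info
```

This is a sketch against a live cluster: run it on the node that owns the brick, and only while the other two replicas are healthy.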
[ovirt-users] Re: Gluster Volumes - Correct Peer Connection
If you can do it from the CLI - use the CLI, as it has far more control than the UI provides. Usually I use the UI for monitoring and basic stuff like starting/stopping a brick or setting the 'virt' group via 'Optimize for Virt' (or whatever it was called).

Best Regards,
Strahil Nikolov

On Wednesday, 30 September 2020, 19:48:21 GMT+3, penguin pages wrote:

I have a network called "Storage" but not one called "gluster logical network".

Front end: 172.16.100.0/24 for mgmt and VMs (1Gb), "ovirtmgmt"
Back end: 172.16.101.0/24 for storage (10Gb), "Storage"

And yes, I was never able to figure out how to use the UI to create bricks, so I just was bad and went to the CLI and made them. But it would be valuable to learn the oVirt "Best Practice" way... though the HCI wizard SHOULD have done this, in that the wizard allows it and I supplied front vs back end.
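For reference, the 'Optimize for Virt' toggle Strahil mentions roughly corresponds to applying Gluster's predefined 'virt' option group from the CLI; a minimal sketch, using the "vmstore" volume from this thread as the example:

```shell
# Apply the predefined 'virt' option group (settings tuned for hosting
# VM images, e.g. sharding and eager locking) to the volume:
gluster volume set vmstore group virt

# Review which options are now set on the volume:
gluster volume info vmstore
```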
[ovirt-users] Re: Gluster Volumes - Correct Peer Connection
Hi Jeremey,

I think the problem is that you have not created a "gluster logical network" from the oVirt manager. So when the bricks are listed, they map to the mgmt network, because that is the only network you have. Could you please confirm whether you have a Gluster logical network created and mapped to the 10G NIC? If not, please create one and check; it should solve the issue.

On Thu, Sep 24, 2020 at 3:24 PM Ritesh Chikatwar wrote:

Jeremey,

This looks like a bug. Are you using an IPv4 or an IPv6 network?

Ritesh

On Thu, Sep 24, 2020 at 12:14 PM Gobinda Das wrote:

But I think this only syncs the gluster brick status, not the entire object. Looks like this is a bug. @Ritesh Chikatwar could you please check what data we are getting from vdsm during the gluster sync job run? Are we saving the exact data or customizing anything?

On Thu, Sep 24, 2020 at 11:01 AM Gobinda Das wrote:

We do have a gluster volume UI sync issue, and it is fixed in ovirt-4.4.2.
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1860775

On Wed, Sep 23, 2020 at 8:50 PM Jeremey Wise wrote:

I just noticed that when the HCI setup built the gluster engine / data / vmstore volumes, it did correctly use the definition of the 10Gb "back end" interfaces / hosts.

But oVirt Engine is NOT referencing this; it lists bricks on the 1Gb "management / host" interfaces. Is this a GUI issue? I doubt it, and how do I correct it?
### Data Volume Example
Name: data
Volume ID: 0ae7b487-8b87-4192-bd30-621d445902fe
Volume Type: Replicate
Replica Count: 3
Number of Bricks: 3
Transport Types: TCP
Maximum no of snapshots: 256
Capacity: 999.51 GiB total, 269.02 GiB used, 730.49 GiB free, 297.91 GiB Guaranteed free, 78 Deduplication/Compression savings (%)

medusa.penguinpages.local  medusa.penguinpages.local:/gluster_bricks/data/data  25%  OK
odin.penguinpages.local    odin.penguinpages.local:/gluster_bricks/data/data    25%  OK
thor.penguinpages.local    thor.penguinpages.local:/gluster_bricks/data/data    25%  OK

# I have a storage back end of 172.16.101.x which is 10Gb, dedicated for replication. Peers reflect this:

[root@odin c4918f28-00ce-49f9-91c8-224796a158b9]# gluster peer status
Number of Peers: 2

Hostname: thorst.penguinpages.local
Uuid: 7726b514-e7c3-4705-bbc9-5a90c8a966c9
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)
[root@odin c4918f28-00ce-49f9-91c8-224796a158b9]#

--
penguinpages

--
Thanks,
Gobinda
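To verify which network the replication actually rides, independent of what the engine UI displays, a quick check sketch (commands run on a live node; the "data" volume and the 172.16.101.0/24 storage subnet from this thread are assumed):

```shell
# Peer identities as glusterd registered them (expect the *st storage names):
gluster peer status

# Hostnames baked into the volume's brick definitions:
gluster volume info data | grep -i brick

# Confirm established brick connections sit on the 10Gb storage subnet:
ss -tn | grep '172\.16\.101\.'
```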