[ovirt-users] Re: gluster 5834 Unsynced entries present
Thank you very much, it took a few minutes but now I don't have any more unsynced entries.

[root@ovnode2 glusterfs]# gluster volume heal datassd info | grep entries | sort | uniq -c
      3 Number of entries: 0

Dominique D

- Message received -
From: Strahil Nikolov via Users (users@ovirt.org)
Date: 01/10/21 11:05
To: Dominique D (dominique.desche...@gcgenicom.com), users@ovirt.org
Subject: [ovirt-users] Re: gluster 5834 Unsynced entries present

Put ovnode2 in maintenance (put a tick for stopping gluster), wait till all VMs evacuate and the host is really in maintenance, then activate it back. Restarting glusterd should also do the trick, but it's always better to ensure no gluster processes have been left running (including the mount points).

Best Regards,
Strahil Nikolov

On Fri, Oct 1, 2021 at 17:06, Dominique D wrote:

Yesterday I had a glitch and my second server, ovnode2, restarted. Here are some errors in the events:

VDSM ovnode3.telecom.lan command SpmStatusVDS failed: Connection timeout for host 'ovnode3.telecom.lan', last response arrived 2455 ms ago.
Host ovnode3.telecom.lan is not responding. It will stay in Connecting state for a grace period of 86 seconds and after that an attempt to fence the host will be issued.
Invalid status on Data Center Default. Setting Data Center status to Non Responsive (On host ovnode3.telecom.lan, Error: Network error during communication with the Host.).
Executing power management status on Host ovnode3.telecom.lan using Proxy Host ovnode1.telecom.lan and Fence Agent ipmilan:10.5.1.16.
Now my 3 bricks have errors on my gluster volume:

[root@ovnode2 ~]# gluster volume status
Status of volume: datassd
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovnode1s.telecom.lan:/gluster_bricks/
datassd/datassd                             49152     0          Y       4027
Brick ovnode2s.telecom.lan:/gluster_bricks/
datassd/datassd                             49153     0          Y       2393
Brick ovnode3s.telecom.lan:/gluster_bricks/
datassd/datassd                             49152     0          Y       2347
Self-heal Daemon on localhost               N/A       N/A        Y       2405
Self-heal Daemon on ovnode3s.telecom.lan    N/A       N/A        Y       2366
Self-heal Daemon on 172.16.70.91            N/A       N/A        Y       4043

Task Status of Volume datassd
------------------------------------------------------------------------------
There are no active volume tasks

gluster volume heal datassd info | grep -i "Number of entries:" | grep -v "entries: 0"
Number of entries: 5759

In the webadmin all the bricks are green, with comments for two of them:

ovnode1 Up, 5834 Unsynced entries present
ovnode2 Up,
ovnode3 Up, 5820 Unsynced entries present

I tried this without success:

gluster volume heal datassd
Launching heal operation to perform index self heal on volume datassd has been unsuccessful: Glusterd Syncop Mgmt brick op 'Heal' failed. Please check glustershd log file for details.

What are the next steps?
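For reference, the per-brick counts in the `heal ... info` output above can be tallied with a short pipeline. This is only a sketch: `count_pending` is a made-up helper name, and the sample text stands in for live `gluster volume heal datassd info` output.

```shell
# Hypothetical helper: count bricks that still report pending heal
# entries. Reads `gluster volume heal <vol> info` output on stdin,
# so it works on captured output too.
count_pending() {
  grep "Number of entries:" | awk '$NF != 0 { n++ } END { print n + 0 }'
}

# Captured sample (stand-in for: gluster volume heal datassd info)
sample='Brick ovnode1s.telecom.lan:/gluster_bricks/datassd/datassd
Number of entries: 5759
Brick ovnode2s.telecom.lan:/gluster_bricks/datassd/datassd
Number of entries: 0
Brick ovnode3s.telecom.lan:/gluster_bricks/datassd/datassd
Number of entries: 5820'

printf '%s\n' "$sample" | count_pending   # prints 2
```

On the live cluster you would pipe the real command into the helper instead of the sample text.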
Thank you

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/QRI2K34O2X3NEEYLWTZJYG26EYH6CJQU/
[ovirt-users] Re: gluster 5834 Unsynced entries present
Hi Dominique,

What's the output of gluster volume heal datassd info summary ?

Regards,
Paul S.

From: Dominique D
Sent: 01 October 2021 15:05
To: users@ovirt.org
Subject: [ovirt-users] gluster 5834 Unsynced entries present

> yesterday I had a glitch and my second ovnode2 server restarted [...]
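For anyone following along, `gluster volume heal <vol> info summary` prints per-brick totals rather than listing every entry. A sketch of pulling the totals out of captured output; the sample text is illustrative (values invented), with field names as printed by the heal-info summary:

```shell
# Captured sample of one brick's section from
# `gluster volume heal datassd info summary` (illustrative values).
summary='Brick ovnode1s.telecom.lan:/gluster_bricks/datassd/datassd
Status: Connected
Total Number of entries: 5834
Number of entries in heal pending: 5834
Number of entries in split-brain: 0
Number of entries possibly healing: 0'

# Extract the per-brick totals (one number per brick).
printf '%s\n' "$summary" | awk -F': ' '/^Total Number of entries/ { print $2 }'   # prints 5834
```

The split-brain and "possibly healing" counters are the ones worth watching: plain heal-pending entries usually drain on their own once the self-heal daemon is healthy.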
[ovirt-users] Re: gluster 5834 Unsynced entries present
Put ovnode2 in maintenance (put a tick for stopping gluster), wait till all VMs evacuate and the host is really in maintenance, then activate it back. Restarting glusterd should also do the trick, but it's always better to ensure no gluster processes have been left running (including the mount points).

Best Regards,
Strahil Nikolov

On Fri, Oct 1, 2021 at 17:06, Dominique D wrote:

> yesterday I had a glitch and my second ovnode2 server restarted [...]
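The "no gluster processes left running" check above can be scripted before activating the host again. A minimal sketch, assuming a Linux host as in this thread; the `leftover_gluster` function name is made up:

```shell
# Before activating the host again, make sure maintenance really
# stopped everything: the management daemon, brick processes, and
# FUSE client mounts.
leftover_gluster() {
  pgrep -x glusterd >/dev/null && return 0           # management daemon
  pgrep -x glusterfsd >/dev/null && return 0         # brick processes
  grep -q 'fuse.glusterfs' /proc/mounts && return 0  # leftover client mounts
  return 1                                           # nothing left behind
}

if leftover_gluster; then
  echo "gluster still running - do not activate the host yet"
else
  echo "clean - safe to activate the host"
fi
```

On a host that was cleanly stopped, the second branch should be taken; anything caught by the first branch should be stopped (or unmounted) before reactivation.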