# df -h /home/brick1
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-home  1.8T  1.8T   18G 100% /home
# df -h /home2/brick2
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   50G   28G   23G  56% /

If sharding is enabled, the restore pauses the VM with an unknown storage error.

Thanks
José

From: "Strahil Nikolov" <[email protected]>
To: [email protected]
Cc: "José Ferradeira via Users" <[email protected]>, "Alex McWhirter" <[email protected]>
Sent: Monday, June 14, 2021 17:14:15
Subject: Re: [ovirt-users] Re: oVirt + Gluster issues

And what is the status of the bricks:

df -h /home/brick1 /home2/brick2

When sharding is not enabled, the qcow2 disks cannot be spread between the bricks.

Best Regards,
Strahil Nikolov

# gluster volume info data1

Volume Name: data1
Type: Distribute
Volume ID: d7eb2c38-2707-4774-9873-a7303d024669
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gs.domain.pt:/home/brick1
Brick2: gs.domain.pt:/home2/brick2
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
storage.owner-uid: 36
storage.owner-gid: 36
cluster.min-free-disk: 10%
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
features.shard: off
user.cifs: off
cluster.choose-local: off
client.event-threads: 4
server.event-threads: 4
performance.client-io-threads: on

# gluster volume status data1
Status of volume: data1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gs.domain.pt:/home/brick1             49153     0          Y       1824862
Brick gs.domain.pt:/home2/brick2            49154     0          Y       1824880

Task Status of Volume data1
------------------------------------------------------------------------------
There are no active volume tasks

# gluster volume heal data1 info summary
This command is supported for only volumes of replicate/disperse type. Volume data1 is not of type replicate/disperse
Volume heal failed.

# df -h /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/
Filesystem           Size  Used Avail Use% Mounted on
gs.domain.pt:/data1  1.9T  1.8T   22G  99% /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1

Thanks
José

From: "Strahil Nikolov" <[email protected]>
To: [email protected]
Cc: "José Ferradeira via Users" <[email protected]>, "Alex McWhirter" <[email protected]>
Sent: Monday, June 14, 2021 14:54:41
Subject: Re: [ovirt-users] Re: oVirt + Gluster issues

Can you provide the output of:

gluster volume info VOLUME
gluster volume status VOLUME
gluster volume heal VOLUME info summary
df -h /rhev/data-center/mnt/glusterSD/<server>:_<volume>

In pure replica volumes, the bricks should be of the same size. If not, the smallest one defines the size of the volume.
If the VM has thin qcow2 disks, it will grow slowly until it reaches its maximum size or until the volume space is exhausted.

Best Regards,
Strahil Nikolov

Well, I have one brick without space, 1.8TB. In fact I don't know why, because I only have one VM on that storage domain with less than 1TB.
When I try to start the VM I get this error:

VM webmail.domain.pt-3 is down with error. Exit message: Unable to set XATTR trusted.libvirt.security.selinux on /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/d680d289-bcaa-46f2-b464-4d06d37ec1d3/images/5167f58d-68c9-475f-8b88-f278b7d4ef65/9b34eff0-c9a4-48e1-8ea7-87ad66a8736c: No space left on device.

I'm stuck here.

Thanks
José

From: "Strahil Nikolov" <[email protected]>
To: [email protected], "José Ferradeira via Users" <[email protected]>
Cc: "Alex McWhirter" <[email protected]>, "José Ferradeira via Users" <[email protected]>
Sent: Monday, June 14, 2021 7:21:09
Subject: Re: [ovirt-users] Re: oVirt + Gluster issues

So, how is it going? Do you have space?
Best Regards,
Strahil Nikolov

On Thu, Jun 10, 2021 at 18:19, Strahil Nikolov <[email protected]> wrote:

You need to use thick VM disks on Gluster, which has been the default behavior for a long time.
Also, check the free space on all bricks. Most probably you are out of space on one of the bricks (the term for a server + mountpoint combination).

Best Regards,
Strahil Nikolov

On Wed, Jun 9, 2021 at 12:41, José Ferradeira via Users <[email protected]> wrote:

_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/[email protected]/message/SC6RR2YC35OOOUEDEVSVCQ7RMW56DCSJ/
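[Editor's note, not part of the thread: the brick-capacity check Strahil suggests can be scripted. A minimal sketch assuming POSIX sh and awk; `check_bricks` is a hypothetical helper name, and the 90% threshold mirrors the volume's `cluster.min-free-disk: 10%` option.]

```shell
# check_bricks: read `df -P` output on stdin and flag any filesystem whose
# usage exceeds the given percentage threshold (100 - min-free-disk).
check_bricks() {
    awk -v limit="$1" 'NR > 1 {
        use = $5
        sub(/%/, "", use)          # "100%" -> "100"
        printf "%s %s%% %s\n", $6, use, (use + 0 > limit ? "FULL" : "ok")
    }'
}

# Example, using the brick paths from this thread:
# df -P /home/brick1 /home2/brick2 | check_bricks 90
```

With the numbers shown above, /home would be reported as FULL (100% used) and / as ok (56% used); a brick past `cluster.min-free-disk` stops accepting new writes even though the volume as a whole still shows free space.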
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/[email protected]/message/OCPUCKW4AK6SZY2GDSFYUZW2UFEYH4MI/
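[Editor's note, not part of the thread: the volume above has `features.shard: off`, which is why a single qcow2 file cannot be split across the two bricks of this distribute volume. Sharding is normally enabled before any VM disks are created; the Gluster documentation warns against toggling `features.shard` on a volume that already holds data. A config sketch for a fresh volume, using the thread's volume name `data1`:]

```shell
# Enable sharding so large files are stored as fixed-size shard files
# that the distribute layer can place on different bricks.
gluster volume set data1 features.shard on

# Shard size; 64MB is the Gluster default block size for shards.
gluster volume set data1 features.shard-block-size 64MB
```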

