[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread Strahil Nikolov via Users
The default value is the one shown by "gluster volume set help | grep -A 3 cluster.min-free-disk"; in my case it is 10%.
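A quick way to check what is actually in effect on a given volume (a minimal sketch; the volume name "data" matches the one used elsewhere in this thread, and the output layout is approximate):

# gluster volume get data cluster.min-free-disk
Option                                  Value
------                                  -----
cluster.min-free-disk                   10%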
The best option is to use a single brick on top of a software or hardware RAID built from disks of equal performance and size. Brick sizes should match, or Gluster will write proportionally based on the storage ratio between the bricks (when they are used in a distributed volume instead of a RAID).

For example, if hdd1 is 500G and hdd2 is 1T, Gluster will try to put double the data (and thus double the load) on hdd2. Yet spinning disks have nearly the same IOPS ratings, so hdd2 will slow down your volume.


For example, I have 3 x 500GB SATA disks in a RAID0 on each (data) node, so I can store slow and less important data.

The optimal setup is a hardware RAID10 of 10 disks (2-3TB each), but that increases cost and reduces usable capacity. If you can afford SSDs/NVMes, go for it; you will need some tuning, but it will perform far better.

Your distributed volume is still OK, but as in this case, performance cannot be guaranteed, and you will eventually run into issues that could have been avoided with proper design of the oVirt DC.

Best Regards,
Strahil Nikolov

[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread suporte
So, in oVirt, if I want a single storage domain without high availability, what is the best solution?
Can I use a Gluster replica 1 volume for a single storage domain?

By the way, cluster.min-free-disk is set to 1%, not the 10% default, and I still cannot remove the disk.
I think I will destroy the volume and configure it again.

Thanks 

José


[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread Strahil Nikolov via Users
Nope,

Officially, oVirt supports only replica 3 (replica 3 arbiter 1) or replica 1 (which is actually a single-brick distributed) volumes.
If you have issues related to the Gluster volume, like in this case, community support will be "best effort".

Best Regards,
Strahil Nikolov

[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread suporte
But there are also distributed volume types, right?
Replicated is for when you want high availability, and that is not the case here.

Thanks 

José


[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread Strahil Nikolov via Users
You have exactly 90% used space.
Gluster's default protection value is exactly 10%:


Option: cluster.min-free-disk
Default Value: 10%
Description: Percentage/Size of disk space, after which the process starts 
balancing out the cluster, and logs will appear in log files

I would recommend temporarily dropping that value until you finish the cleanup.

gluster volume set data cluster.min-free-disk 5%

To restore the default values of any option:
gluster volume reset data cluster.min-free-disk


P.S.: Keep in mind that if your VMs have sparse disks, they will be unpaused and start eating the last space you have left... so be quick :)

P.S.2: I hope you know that the only supported volume types are 
'distributed-replicated' and 'replicated' :)

Best Regards,
Strahil Nikolov


[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread suporte
This is a simple installation with one storage domain and 3 hosts. One volume with two bricks; the second brick was added to get more space so I could try to remove the disk, but without success.

# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/md127              50G  2.5G   48G   5% /
devtmpfs               7.7G     0  7.7G   0% /dev
tmpfs                  7.7G   16K  7.7G   1% /dev/shm
tmpfs                  7.7G   98M  7.6G   2% /run
tmpfs                  7.7G     0  7.7G   0% /sys/fs/cgroup
/dev/md126            1016M  194M  822M  20% /boot
/dev/md125             411G  407G  4.1G 100% /home
gfs2.domain.com:/data  461G  414G   48G  90% /rhev/data-center/mnt/glusterSD/gfs2.domain.com:_data
tmpfs                  1.6G     0  1.6G   0% /run/user/0


# gluster volume status data
Status of volume: data
Gluster process                       TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs2.domain.com:/home/brick1    49154     0          Y       8908
Brick gfs2.domain.com:/brickx         49155     0          Y       8931

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

# gluster volume info data 

Volume Name: data 
Type: Distribute 
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 2 
Transport-type: tcp 
Bricks: 
Brick1: gfs2.domain.com:/home/brick1 
Brick2: gfs2.domain.com:/brickx 
Options Reconfigured: 
cluster.min-free-disk: 1% 
cluster.data-self-heal-algorithm: full 
performance.low-prio-threads: 32 
features.shard-block-size: 512MB 
features.shard: on 
storage.owner-gid: 36 
storage.owner-uid: 36 
transport.address-family: inet 
nfs.disable: on


[ovirt-users] Re: Gluster Domain Storage full

2020-10-26 Thread Strahil Nikolov via Users
So what is the output of "df" against:
- all bricks in the volume (all nodes)
- the mount point in /rhev/mnt/

Usually, adding a new brick (per host) to a replica 3 volume should give you more space.
Also, what is the status of the volume:

gluster volume status <volname>
gluster volume info <volname>
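With the brick path and mount point from José's reply above (the volume there is named "data"), that would be, for example:

df -h /home/brick1
df -h /rhev/data-center/mnt/glusterSD/gfs2.domain.com:_data
gluster volume status data
gluster volume info data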


Best Regards,
Strahil Nikolov


[ovirt-users] Re: Gluster Domain Storage full

2020-10-18 Thread Strahil Nikolov via Users
What is the output of:
df -h /rhev/data-center/mnt/glusterSD/<server>_<volume>/

gluster volume status <volume>
gluster volume info <volume>

In the "df" output you should see the new space; otherwise you won't be able to do anything.

Best Regards,
Strahil Nikolov


[ovirt-users] Re: Gluster Domain Storage full

2020-10-15 Thread suporte
Hello, 

I just added a second brick to the volume. Now I have 10% free, but I still cannot delete the disk. Still the same message:

VDSM command DeleteImageGroupVDS failed: Could not remove all image's volumes: 
(u'b6165676-a6cd-48e2-8925-43ed49bc7f8e [Errno 28] No space left on device',) 

Any idea? 
Thanks 

José


[ovirt-users] Re: Gluster Domain Storage full

2020-09-22 Thread Strahil Nikolov via Users
Any option to extend the Gluster volume?

Other approaches are quite destructive. I guess you can obtain the VM's XML via virsh and then copy the disks to another pure-KVM host.
Then you can start the VM there while you are recovering from the situation.

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf dumpxml <VM_NAME> > /some/path/<VM_NAME>.xml

Once you have the VM running on a pure-KVM host, you can go to oVirt and try to wipe the VM from the UI.
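A rough sketch of the copy-and-define half of that (the VM name "myvm", the KVM host "kvmhost" and the image paths are all illustrative, and the dumped XML will need its disk paths adjusted to the new location):

# copy the dumped XML and the disk image directory to the pure-KVM host
scp /some/path/myvm.xml kvmhost:/root/
rsync -avP /rhev/data-center/mnt/glusterSD/gfs2.domain.com:_data/<sd_uuid>/images/<img_uuid>/ kvmhost:/var/lib/libvirt/images/myvm/

# on the KVM host: edit the disk paths in the XML, then define and start the VM
virsh define /root/myvm.xml
virsh start myvm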


Usually that 10% reserve is there just in case something like this happens, but Gluster doesn't check it every second (or the overhead would be crazy).

Maybe you can extend the Gluster volume temporarily, until you manage to move the VM away to bigger storage. Then you can reduce the volume back to its original size, as sketched below.
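Shrinking back afterwards would be the usual remove-brick flow (a sketch; the brick name here matches the temporary second brick gfs2.domain.com:/brickx shown earlier in this archive):

# migrate data off the temporary brick, watch progress, then commit
gluster volume remove-brick data gfs2.domain.com:/brickx start
gluster volume remove-brick data gfs2.domain.com:/brickx status
gluster volume remove-brick data gfs2.domain.com:/brickx commit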

Best Regards,
Strahil Nikolov



[ovirt-users] Re: Gluster Domain Storage full

2020-09-22 Thread suporte
Hello Strahil, 

I just set cluster.min-free-disk to 1%: 
# gluster volume info data 

Volume Name: data 
Type: Distribute 
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 1 
Transport-type: tcp 
Bricks: 
Brick1: node2.domain.com:/home/brick1 
Options Reconfigured: 
cluster.min-free-disk: 1% 
cluster.data-self-heal-algorithm: full 
performance.low-prio-threads: 32 
features.shard-block-size: 512MB 
features.shard: on 
storage.owner-gid: 36 
storage.owner-uid: 36 
transport.address-family: inet 
nfs.disable: on 

But I still get the same error: Error while executing action: Cannot move Virtual Disk. Low disk space on Storage Domain.
I restarted the glusterfs volume.
But I cannot do anything with the VM disk.


I know that filling the bricks is very bad; we lost access to the VM. I think there should be a mechanism that prevents the VM from stopping, so we could keep access to the VM and free some space.

If you have a VM with a thin-provisioned disk and the VM fills the entire disk, you get the same problem.

Any idea? 

Thanks 

José



[ovirt-users] Re: Gluster Domain Storage full

2020-09-21 Thread Strahil Nikolov via Users
Usually Gluster has a 10% reserve defined in the 'cluster.min-free-disk' volume option.
You can power off the VM, then set cluster.min-free-disk to 1% and immediately move any of the VM's disks to another storage domain.

Keep in mind that filling your bricks is bad; if you eat into that reserve, the only option would be to export the VM as an OVA, then wipe it from the current storage and import it into a bigger storage domain.

Of course, it would be more sensible to just expand the Gluster volume (either scale up the bricks -> add more disks, or scale out -> add more servers with disks on them), but I guess that is not an option, right?
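In command form, a scale-out of a distributed volume would be something like (a sketch; the new host and brick path are illustrative, while the volume name "data" matches the rest of this thread):

gluster volume add-brick data gfs3.domain.com:/gluster/brick1
gluster volume rebalance data start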

Best Regards,
Strahil Nikolov


On Monday, 21 September 2020 at 15:58:01 GMT+3, supo...@logicworks.pt wrote:

Hello,

I'm running oVirt Version 4.3.4.3-1.el7.
I have a small GlusterFS storage domain whose brick sits on a dedicated filesystem, serving only one VM.
The VM filled the entire storage domain.
The Linux filesystem shows 4.1G available and 100% used; the mounted brick has 0GB available and 100% used.

I cannot do anything with this disk. For example, if I try to move it to another Gluster storage domain I get the message:

Error while executing action: Cannot move Virtual Disk. Low disk space on 
Storage Domain

Any idea?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt