[ovirt-users] Re: how to forcibly remove dead node from Gluster 3 replica 1 arbiter setup?

2021-11-10 Thread dhanaraj.ramesh--- via Users
Hi Strahil Nikolov

Thank you for the suggestion, but it does not help:



[root@beclovkvma01 ~]# sudo gluster volume remove-brick datastore1 replica 1 
beclovkvma02.bec..net:/data/brick2/brick2  
beclovkvma02.bec..net:/data/brick3/brick3  
beclovkvma02.bec..net:/data/brick4/brick4  
beclovkvma02.bec..net:/data/brick5/brick5  
beclovkvma02.bec..net:/data/brick6/brick6  
beclovkvma02.bec..net:/data/brick7/brick7 
beclovkvma02.bec..net:/data/brick8/brick8 force
Remove-brick force will not migrate files from the removed bricks, so they will 
no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: failed: need 14(xN) bricks for reducing 
replica count of the volume from 3 to 1
[root@beclovkvma01 ~]# sudo gluster volume remove-brick datastore1 replica 1 
beclovkvma02.bec..net:/data/brick2/brick2 force
Remove-brick force will not migrate files from the removed bricks, so they will 
no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: failed: need 14(xN) bricks for reducing 
replica count of the volume from 3 to 1
[root@beclovkvma01 ~]# sudo gluster volume remove-brick datastore1 replica 2 
beclovkvma02.bec..net:/data/brick2/brick2  force
Remove-brick force will not migrate files from the removed bricks, so they will 
no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: failed: need 7(xN) bricks for reducing 
replica count of the volume from 3 to 2
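
For context on the "need 14(xN) bricks" errors above: the volume appears to have 7 
replica 3 arbiter 1 subvolumes, so dropping the replica count from 3 to 1 means 
gluster expects 2 bricks (the arbiter brick plus the dead node's data brick) from 
every one of the 7 subvolumes in a single command, 14 bricks in total. A rough 
sketch of that shape, with placeholder hostnames and brick paths rather than the 
real ones from this cluster:

# Sketch only: ARBITER/DEADNODE and the brick paths are placeholders, not taken
# from this setup. Builds the 14-brick list (arbiter brick + dead node's brick
# for each of the 7 subvolumes) and reduces the volume to replica 1.
ARBITER=arbiter-host.example.net
DEADNODE=dead-host.example.net
BRICKS=""
for n in 2 3 4 5 6 7 8; do
  BRICKS="$BRICKS $ARBITER:/data/brick$n/brick$n $DEADNODE:/data/brick$n/brick$n"
done
gluster volume remove-brick datastore1 replica 1 $BRICKS force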


[ovirt-users] Re: cloning a VM or creating a template speed is so so slow

2021-11-10 Thread Pascal D
Using -o preallocation=metadata brings it down to 7m40s.
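
For reference, a sketch of how that option slots into the qemu-img convert command 
quoted later in this thread (whether it was combined with the writeback cache 
change is not stated; src.qcow2 and dst.qcow2 stand in for the full image paths):

# Hypothetical sketch: same convert oVirt runs, but preallocating qcow2 metadata
# in the destination image. src.qcow2/dst.qcow2 are placeholders for the real paths.
qemu-img convert -p -t none -T none -f qcow2 src.qcow2 \
  -O qcow2 -o compat=1.1,preallocation=metadata dst.qcow2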


[ovirt-users] cloning a VM or creating a template speed is so so slow

2021-11-10 Thread Pascal D
I have been trying to figure out why cloning a VM and creating a template from 
ovirt is so slow. I am using ovirt 4.3.10 over NFS. My NFS server is running 
NFS 4 over RAID10 with SSD disks over a 10G network and 9000 MTU.

Theoretically I should be writing a 50GB file in around 1m30s.
A direct copy (run on the SPM host) of an image to another image on the same 
host takes 6m34s; a clone from oVirt takes around 29m.

So quite a big difference. I started investigating and found that oVirt 
launches the qemu-img process with no source or target cache. Thinking that 
could be the issue, I changed the cache mode to writeback and was able to run 
the exact same command in 8m14s, over 3 times faster. I haven't yet tried other 
parameters like -o preallocation=metadata, but I was wondering why no cache is 
selected and how to change it to use writeback cache.

command launched by ovirt:
 /usr/bin/qemu-img convert -p -t none -T none -f qcow2 
/rhev/data-center/mnt/nas1.bfit:_home_VMS/8e6bea49-9c62-4e31-a3c9-0be09c2fcdbf/images/21f438fb-0c0e-4bdc-abb3-64a7e033cff6/c256a972-4328-4833-984d-fa8e62f76be8
 -O qcow2 -o compat=1.1 
/rhev/data-center/mnt/nas1.bfit:_home_VMS/8e6bea49-9c62-4e31-a3c9-0be09c2fcdbf/images/5a90515c-066d-43fb-9313-5c7742f68146/ed6dc60d-1d6f-48b6-aa6e-0e7fb1ad96b9
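
A sketch of the writeback test described above, assuming the change was made to 
the destination cache flag (-t); src.qcow2 and dst.qcow2 stand in for the long 
/rhev/data-center/... paths in the original command:

# Hypothetical sketch: same convert as above, but with writeback caching on the
# destination instead of -t none. Paths are placeholders.
qemu-img convert -p -t writeback -T none -f qcow2 src.qcow2 \
  -O qcow2 -o compat=1.1 dst.qcow2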





[ovirt-users] Re: how to forcibly remove dead node from Gluster 3 replica 1 arbiter setup?

2021-11-10 Thread Strahil Nikolov via Users
You have to specify the volume type. When you remove 1 brick from a replica 3 
volume, you are actually converting it to replica 2.
As you have 2 data bricks + 1 arbiter, just remove the arbiter brick and the 
missing node's brick:
gluster volume remove-brick VOL replica 1 node2:/brick node3:/brick force
What is the output of 'gluster volume info VOL' and 'gluster volume heal VOL 
info summary'?

Best Regards,
Strahil Nikolov
 
 
On Wed, Nov 10, 2021 at 11:42, dhanaraj.ramesh--- via Users wrote:
The volume is configured as a Distributed Replicate volume with 7 bricks. When I 
try from the GUI, I get the error below.

Error while executing action Remove Gluster Volume Bricks: Volume remove brick 
force failed: rc=-1 out=() err=['Remove arbiter brick(s) only when converting 
from arbiter to replica 2 subvolume_']


[ovirt-users] Upgraded to oVirt 4.4.9, still have vdsmd memory leak

2021-11-10 Thread Chris Adams
I have seen vdsmd leak memory for years (I've been running oVirt since
version 3.5), but never been able to nail it down.  I've upgraded a
cluster to oVirt 4.4.9 (reloading the hosts with CentOS 8-stream), and I
still see it happen.  One host in the cluster, which has been up 8 days,
has vdsmd with 4.3 GB resident memory.  On a couple of other hosts, it's
around half a gigabyte.

In the past, it seemed more likely to happen on the hosted engine hosts
and/or the SPM host... but the host with the 4.3 GB vdsmd is not either
of those.

I'm not sure what I do that would make my setup "special" compared to
others; I loaded a pretty minimal install of CentOS 8-stream, with the
only extra thing being I add the core parts of the Dell PowerEdge
OpenManage tools (so I can get remote SNMP hardware monitoring).

When I run "pmap $(pidof -x vdsmd)", the bulk of the RAM use is a single
anonymous block (which I'm guessing is just the python general memory
allocator).
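
A rough way to confirm which mapping holds the memory, assuming the procps-ng 
pmap where column 3 of 'pmap -x' output is RSS in kilobytes:

# Sketch: show the ten largest resident mappings of the vdsmd process.
pmap -x $(pidof -x vdsmd) | sort -n -k3 | tail -n 10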

I thought maybe the switch to CentOS 8 and python 3 might clear
something up, but obviously not.  Any ideas?
-- 
Chris Adams 


[ovirt-users] Re: how to forcibly remove dead node from Gluster 3 replica 1 arbiter setup?

2021-11-10 Thread dhanaraj.ramesh--- via Users
The volume is configured as a Distributed Replicate volume with 7 bricks. When I 
try from the GUI, I get the error below.

Error while executing action Remove Gluster Volume Bricks: Volume remove brick 
force failed: rc=-1 out=() err=['Remove arbiter brick(s) only when converting 
from arbiter to replica 2 subvolume_']


[ovirt-users] Re: how to forcibly remove dead node from Gluster 3 replica 1 arbiter setup?

2021-11-10 Thread dhanaraj.ramesh--- via Users
When I try to remove all of node 2's bricks, I get the error below:

volume remove-brick commit force: failed: Bricks not from same subvol for 
replica

When I try to remove just one of node 2's bricks, I get the error below:

volume remove-brick commit force: failed: Remove brick incorrect brick count of 
1 for replica 3 

My setup is running replica 3 with 1 arbiter, and I am unable to remove the dead 
2nd node...