[ovirt-users] Re: [Ext.] Re: Re: Storage Design Question - GlusterFS

2022-04-07 Thread Strahil Nikolov via Users
This one shows whether a setting differs from the defaults. As I don't know the
default (you can see it with gluster volume set help), you can check via
gluster volume get <volname> cluster.lookup-optimize.

Best Regards,
Strahil Nikolov
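For reference, a quick way to run that comparison, assuming the volume name "data" from the config quoted below:

```shell
# Current (possibly reconfigured) value of the single option:
gluster volume get data cluster.lookup-optimize

# Default value and description, from the full option help:
gluster volume set help | grep -A 3 'cluster.lookup-optimize'

# Only the options explicitly changed on the volume
# (listed under "Options Reconfigured"):
gluster volume info data
```

These are administrative commands run against a live glusterd, so output depends on the installed Gluster version.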
 
 
On Wed, Apr 6, 2022 at 16:50, Mohamed Roushdy wrote:
Well, I think it’s not configured. Here’s the config:

Volume Name: data
Type: Replicate
Volume ID: 06cac4b9-3d6a-410b-93e6-4f876bbfcdd1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: server1:/gluster/data/brick1
Brick2: server2:/gluster/data/brick1
Brick3: server3:/gluster/data/brick1
Options Reconfigured:
features.shard-block-size: 512MB
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
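As an aside, most of the options in that list match Gluster's bundled "virt" option group, which oVirt applies to VM-image volumes. If such a volume ever has to be rebuilt, the group can be reapplied in one step rather than option by option (volume name "data" assumed; the group file ships with glusterfs-server under /var/lib/glusterd/groups/):

```shell
# Apply the bundled "virt" option profile in a single command:
gluster volume set data group virt

# oVirt additionally expects vdsm:kvm (36:36) ownership on the volume:
gluster volume set data storage.owner-uid 36
gluster volume set data storage.owner-gid 36
```

This keeps the tuning consistent across volumes instead of relying on hand-copied settings.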
 
  
 
From: Strahil Nikolov
Sent: Wednesday, April 6, 2022 1:42 PM
To: Mohamed Roushdy ; users@ovirt.org
Subject: [Ext.] Re: [ovirt-users] Re: Storage Design Question - GlusterFS

Can you check if the volume had 'cluster.lookup-optimize' enabled (on)?
Most probably it caused your problems.

For details check [1] [2]

Best Regards,
Strahil Nikolov

[1] https://access.redhat.com/solutions/5896761
[2] https://lists.ovirt.org/archives/list/users@ovirt.org/thread/BEYIM56H2Q3PVFOHKMC5LSL5RS4VIQL4/

On Wed, Apr 6, 2022 at 10:42, mohamedrous...@peopleintouch.com wrote:
 
Thanks a lot for your answer. The thing is, we had a data loss disaster last
month, and that's why we've decided to deploy a completely new environment for
that purpose, and let me tell you what happened. We are using oVirt 4.1 with
Gluster 3.8 for our production, and the cluster contains five hosts, three of
them storage servers. I added two more hosts and, manually via the CLI (as I
couldn't do this via the GUI), created a new brick on one of the newly added
nodes and expanded the volume with the parameter "replica 4", as it refused to
expand without setting the replica count. The action was successful for both
the engine volume and the data volume; however, a few minutes later we found
that the disk images of the existing VMs had shrunk and lost all of their
data, and we had to recover from backup. We can't afford another downtime
(another disaster, I mean), which is why I'm so meticulous about what to do
next. So, is there a good guide around for the architecture and sizing?
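For the record, the expansion step described above would normally look like the sketch below, with cluster.lookup-optimize switched off first, per the advice and references in this thread (the volume name "data" matches the quoted config; server4 and its brick path are hypothetical):

```shell
# Per this thread, lookup-optimize can misdirect lookups while bricks
# are being added, which is linked to VM image corruption; disable it
# before expanding a volume that holds VM disks:
gluster volume set data cluster.lookup-optimize off

# Then grow the replica set, stating the new replica count explicitly:
gluster volume add-brick data replica 4 server4:/gluster/data/brick1

# Verify the new brick layout before putting load on the volume:
gluster volume info data
```

This is a sketch of the commands involved, not a tested recovery procedure; the referenced Red Hat solution [1] covers the underlying issue in detail.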
 


Thank you,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G4LS3BEUHYFS2B5Q2AC4HIP4YX4AT5AZ/
 
  
 
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LD3H5TUD75MGUY4GJH4GPKN4TZ65AZ5K/


[ovirt-users] Re: [Ext.] Re: Re: Storage Design Question - GlusterFS

2022-04-06 Thread Mohamed Roushdy
Well, I think it’s not configured. Here’s the config:

Volume Name: data
Type: Replicate
Volume ID: 06cac4b9-3d6a-410b-93e6-4f876bbfcdd1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: server1:/gluster/data/brick1
Brick2: server2:/gluster/data/brick1
Brick3: server3:/gluster/data/brick1
Options Reconfigured:
features.shard-block-size: 512MB
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

From: Strahil Nikolov 
Sent: Wednesday, April 6, 2022 1:42 PM
To: Mohamed Roushdy ; users@ovirt.org
Subject: [Ext.] Re: [ovirt-users] Re: Storage Design Question - GlusterFS

Can you check if the volume had 'cluster.lookup-optimize' enabled (on)?
Most probably it caused your problems.

For details check [1] [2]

Best Regards,
Strahil Nikolov
[1] https://access.redhat.com/solutions/5896761

[2] 
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/BEYIM56H2Q3PVFOHKMC5LSL5RS4VIQL4/


On Wed, Apr 6, 2022 at 10:42, mohamedrous...@peopleintouch.com wrote:
Thanks a lot for your answer. The thing is, we had a data loss disaster last
month, and that's why we've decided to deploy a completely new environment for
that purpose, and let me tell you what happened. We are using oVirt 4.1 with
Gluster 3.8 for our production, and the cluster contains five hosts, three of
them storage servers. I added two more hosts and, manually via the CLI (as I
couldn't do this via the GUI), created a new brick on one of the newly added
nodes and expanded the volume with the parameter "replica 4", as it refused to
expand without setting the replica count. The action was successful for both
the engine volume and the data volume; however, a few minutes later we found
that the disk images of the existing VMs had shrunk and lost all of their
data, and we had to recover from backup. We can't afford another downtime
(another disaster, I mean), which is why I'm so meticulous about what to do
next. So, is there a good guide around for the architecture and sizing?


Thank you,

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OBGDMGFP4I2AWR2R7BMNCPHKRSYCACXR/