Re: [Gluster-users] glusterfs on ovirt 4.3 problem after multiple power outages.

2022-10-19 Thread Γιώργος Βασιλόπουλος

I have already done this; it didn't seem to help.

Could resetting the arbiter brick be a solution?
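If you do go that route, here is a minimal sketch of rebuilding the arbiter brick with reset-brick (the brick path g2:/gluster_bricks/volume3/arbiter is hypothetical; adjust to your actual layout and take backups first):

    gluster volume reset-brick volume3 g2:/gluster_bricks/volume3/arbiter start
    # with the brick process down, wipe the old arbiter brick contents on g2,
    # including the .glusterfs directory, then re-register the same path:
    gluster volume reset-brick volume3 g2:/gluster_bricks/volume3/arbiter \
        g2:/gluster_bricks/volume3/arbiter commit force
    # the self-heal daemon should then repopulate the arbiter from g1/g3
    gluster volume heal volume3 info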

On 10/19/22 01:40, Strahil Nikolov wrote:

Usually, I would run a full heal and check if it improves the situation:

gluster volume heal <VOLNAME> full
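To track whether the full heal is actually making progress, one option (assuming the standard Gluster heal CLI; <VOLNAME> is a placeholder):

    gluster volume heal <VOLNAME> info                    # list entries still pending heal
    gluster volume heal <VOLNAME> statistics heal-count   # per-brick count of pending entries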

Best Regards,
Strahil Nikolov


On Tue, Oct 18, 2022 at 14:01, Γιώργος Βασιλόπουλος wrote:
Hello, I am seeking consultation regarding a problem with files not
healing after multiple (3) power outages on the servers.


The configuration is like this :

There are 3 servers (g1,g2,g3) with 3 volumes (volume1, volume2,
volume3) with replica 2 + arbiter.

GlusterFS version is 6.10.

Volume1 and volume2 are OK. On volume3 there are about 12403 heal
entries that are not healing, and some virtual drives of oVirt VMs
are not starting; I cannot copy them either.

For volume3, the data bricks are on g3 and g1 and the arbiter brick is on g2.

There are .prob-uuid-something files which are identical on the two
servers (g1, g3) that hold the data bricks of volume3. On g2 (the
arbiter brick) there are no such files.

I have stopped the volume, unmounted the bricks, run xfs_repair on all
of them, remounted the bricks and started the volume. It did not fix
the problem.

Is there anything I can do to fix the problem?








Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users






[Gluster-users] glusterfs on ovirt 4.3 problem after multiple power outages.

2022-10-18 Thread Γιώργος Βασιλόπουλος
Hello, I am seeking consultation regarding a problem with files not 
healing after multiple (3) power outages on the servers.



The configuration is like this :

There are 3 servers (g1,g2,g3) with 3 volumes (volume1, volume2, 
volume3) with replica 2 + arbiter.


GlusterFS version is 6.10.

Volume1 and volume2 are OK. On volume3 there are about 12403 heal 
entries that are not healing, and some virtual drives of oVirt VMs are 
not starting; I cannot copy them either.


For volume3, the data bricks are on g3 and g1 and the arbiter brick is on g2.

There are .prob-uuid-something files which are identical on the two 
servers (g1, g3) that hold the data bricks of volume3. On g2 (the 
arbiter brick) there are no such files.


I have stopped the volume, unmounted the bricks, run xfs_repair on all 
of them, remounted the bricks and started the volume. It did not fix the problem.


Is there anything I can do to fix the problem?
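If some of those 12403 entries are actually in split-brain (which could also explain VM disks refusing to start), it may be worth checking explicitly. A sketch, assuming the standard heal CLI; the resolution policy shown is only one of several options:

    gluster volume heal volume3 info split-brain
    # for a file reported there, one possible resolution policy:
    gluster volume heal volume3 split-brain latest-mtime <path-as-printed-by-info>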








Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] trying to figure out the best solution for vm and email storage

2018-07-25 Thread Γιώργος Βασιλόπουλος

Hello
I am trying to lay out my options regarding GlusterFS storage, for VM 
storage and email storage.


The hardware at my disposal is fixed: I have 9 servers for running VMs 
under oVirt 4.2 and 3 servers for storage.


The 3 storage machines are similar; each has 2x E5-2640v3 CPUs, 128GB 
RAM and 2x10G Ethernet.
Each storage server also has 2x300GB 10k drives inside, which I intend 
to use for the OS install and maybe a little volume for ISOs over NFS.
Also present are 6x200GB SSD drives, which I am thinking of using as 
tiering volume(s).
The main storage is an external JBOD box with 12x4TB drives, connected 
via SAS to the server through a RAID controller capable of various RAID 
levels.


So what I'm thinking is that I will create 3 replica 3 arbiter 1 volumes 
(high availability is the no. 1 requirement) in a cyclic fashion, with 2 
data bricks and one arbiter brick on each storage server.
I will implement RAID6 with one spare drive on each server (we are on an 
island and getting a disk replacement can occasionally take days), which 
will give me about 36TB of usable storage per server.
So for the VM storage I am thinking of 2 volumes of 17TB each, with an 
arbiter of 1TB.
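
For the cyclic layout, a minimal sketch of one of the three volumes (host and brick names s1/s2/s3 are hypothetical; each volume would put its arbiter on a different server):

    gluster volume create vmstore1 replica 3 arbiter 1 \
        s1:/bricks/vmstore1/data \
        s2:/bricks/vmstore1/data \
        s3:/bricks/vmstore1/arbiter
    gluster volume start vmstore1
    # vmstore2 would place its arbiter on s1, the third volume on s2
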
What messes things up is that I was required to put the email storage in 
this installation. Our email is pretty big and busy, with about 5 
users currently at 13TB of storage.
Currently it runs on a few VMs and uses storage over NFS served from 
another VM. It runs Postfix/Dovecot, and right now a single big VM does 
mailbox delivery, but this is reaching its limits. Mail storage is 
currently on an EMC VNX5500, but it will be moved to GlusterFS for 
various reasons.

I would like some advice regarding the email storage. I think my options 
are:


1a. Use a VM as an NFS server, give it one huge disk (raw image on 
gluster, VM-optimized) and be done with it.
1b. Use a VM as an NFS server, give it 2 or 3 disks unified under an LVM 
VG->LV (raw images on gluster, VM-optimized) and maybe take advantage of 
2-3 io-threads in oVirt writing to the 2-3 disks simultaneously (see the 
sketch after this list of options). Will this give extra performance?


2. Serve gluster over NFS straight to Dovecot, but I wonder if this will 
have a performance drawback since it will be FUSE-mounted. I am also 
worried about the arbiter brick, since there will be thousands of small 
files; practically the arbiter will probably have to be as large as the 
data bricks, or half that size.

3. Use a native GlusterFS mount point, which I think will have about the 
same issues as 2.
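
For option 1b, a minimal sketch of joining the 2-3 virtual disks inside the NFS VM (device names and the volume group name are hypothetical; whether striping actually helps on top of Gluster would need testing):

    pvcreate /dev/vdb /dev/vdc /dev/vdd
    vgcreate mail_vg /dev/vdb /dev/vdc /dev/vdd
    # stripe the LV across the three PVs so writes hit all backing images
    lvcreate -n mail_lv -l 100%FREE -i 3 mail_vg
    mkfs.xfs /dev/mail_vg/mail_lv
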


I have read about problems with Dovecot indexes and GlusterFS. Is this 
still an issue, or is it a problem that only shows up when there is no 
Dovecot director?
Personally I am inclined to go with solution 1, because I think the 
arbiter bricks will be smaller (am I right?), though it may have some 
overhead from running NFS on the VM. On the other hand this solution 
will use libgfapi, which might balance things a bit.
Would it help in such a case to use a small (16MB) shard size and tiering?

I'm afraid I have it a bit mixed up in my mind and I could really use 
some help.




--
George Vasilopoulos
Electrical Engineer T.E.
Systems Administrator

University of Crete
Κ.Υ.Υ.Τ.Π.Ε.
Department of Communications and Networks
Voutes, Heraklion 70013
Tel   : 2810393310
email : g.vasilopou...@uoc.gr
http://www.ucnet.uoc.gr

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gluster performance and new implementation

2018-07-23 Thread Γιώργος Βασιλόπουλος

Hello

I have set up an experimental Gluster replica 3 arbiter 1 volume for oVirt.

The network between the Gluster servers is 2x1G (mode 4 bonding) and the 
network on the oVirt side is 1G.


I'm observing write performance of about 30-35 MB/sec. Is this normal?


I was expecting about 45-55 MB/sec on writes, given that write speed 
should be roughly network throughput/2.


Is this expected given that there is an arbiter brick in place?

Is it network throughput/3, as with a true replica 3 volume, when there 
is an arbiter brick?
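
To separate raw disk/network limits from Gluster overhead, a rough comparison can help (a sketch; /mnt/gv01 and the scratch path are hypothetical, and the test files should be removed afterwards):

    # sequential write through the Gluster FUSE mount
    dd if=/dev/zero of=/mnt/gv01/ddtest.img bs=1M count=2048 oflag=direct
    # same test on the filesystem backing a brick (outside the brick directory)
    dd if=/dev/zero of=/gluster_scratch/ddtest.img bs=1M count=2048 oflag=direct
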


It seems that file size (small or large) currently has little impact on 
performance.


My volume settings are as follows:

Volume Name: gv01
Type: Replicate
Volume ID: 3396f285-ba1e-4360-94a9-5cc65ede62c9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1.datacenter.uoc.gr:/gluster_bricks/brick00
Brick2: gluster2.datacenter.uoc.gr:/gluster_bricks/brick00
Brick3: gluster3.datacenter.uoc.gr:/gluster_bricks/brick00 (arbiter)
Options Reconfigured:
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
auth.allow: xxx.xxx.xxx.*,xxx.xxx.xxx.*
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: on
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
performance.cache-size: 192MB
performance.write-behind-window-size: 524288
features.shard-block-size: 64MB
cluster.eager-lock: enable
diagnostics.brick-log-level: WARNING
diagnostics.client-log-level: WARNING
performance.cache-refresh-timeout: 4
performance.strict-write-ordering: off
performance.flush-behind: off
network.inode-lru-limit: 1
performance.md-cache-timeout: 600
performance.cache-invalidation: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
client.event-threads: 6
server.event-threads: 6
performance.stat-prefetch: on
performance.parallel-readdir: false
cluster.use-compound-fops: true
cluster.readdir-optimize: true
cluster.lookup-optimize: true
cluster.self-heal-daemon: enable
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
cluster.server-quorum-ratio: 51
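
To see where the write latency is actually going (network round-trips vs. brick-side I/O), Gluster's built-in profiling may help; a sketch:

    gluster volume profile gv01 start
    # run the usual VM write workload for a few minutes, then:
    gluster volume profile gv01 info
    gluster volume profile gv01 stop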


Regards

George Vasilopoulos

Systems administrator UCNET University of Crete

--
George Vasilopoulos
Electrical Engineer T.E.
Systems Administrator

University of Crete
Κ.Υ.Υ.Τ.Π.Ε.
Department of Communications and Networks
Voutes, Heraklion 70013
Tel   : 2810393310
email : g.vasilopou...@uoc.gr
http://www.ucnet.uoc.gr

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users