Re: [Gluster-users] Performance issue, need guidance

2019-01-22 Thread Strahil
I have just checked the archive and it seems that the diagram is missing, so I'm adding a URL link to it: https://drive.google.com/file/d/1SiW21ASPXHRAEuE_jZ50R3FoO-NcnFqT/view?usp=sharing
My version is 3.12.15.
Best Regards,
Strahil Nikolov

Re: [Gluster-users] writev: Transport endpoint is not connected

2019-01-22 Thread Raghavendra Gowdappa
On Wed, Jan 23, 2019 at 1:59 AM Lindolfo Meira wrote:
> Dear all,
>
> I've been trying to benchmark a gluster file system using the MPIIO API of IOR. Almost every time I try to run the application with more than 6 tasks performing I/O (mpirun -n N, for N > 6) I get the error: "writev:

[Gluster-users] writev: Transport endpoint is not connected

2019-01-22 Thread Lindolfo Meira
Dear all, I've been trying to benchmark a gluster file system using the MPIIO API of IOR. Almost every time I try to run the application with more than 6 tasks performing I/O (mpirun -n N, for N > 6) I get the error: "writev: Transport endpoint is not connected". And then each one of the
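For context, a minimal IOR invocation of the kind described might look like this (mount point, task count, and transfer/block sizes are placeholders, not values from the original report):

  # run IOR through the MPIIO API with 8 tasks against a Gluster mount
  mpirun -n 8 ior -a MPIIO -w -r -t 1m -b 128m -o /mnt/glusterfs/ior-testfile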

[Gluster-users] Announcing Gluster release 5.3 and 4.1.7

2019-01-22 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster 4.1.7 and 5.3 (packages available at [1] & [2]). Release notes for the release can be found at [3] & [4]. Major changes, features and limitations addressed in this release:
- This release fixes several security vulnerabilities

Re: [Gluster-users] File renaming not geo-replicated

2019-01-22 Thread Sunny Kumar
Hi Arnaud, To analyse this behaviour I need the slave log and the slave mount log as well; please share the complete logs, not just snippets. You can find the logs on the master under /var/log/glusterfs/geo-replication/* and on the slave node under /var/log/glusterfs/geo-replication-slave/*. - Sunny
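A sketch of collecting those logs for sharing (archive names are placeholders; paths as given above):

  # on the master node
  tar czf geo-rep-master-logs.tar.gz /var/log/glusterfs/geo-replication/
  # on the slave node
  tar czf geo-rep-slave-logs.tar.gz /var/log/glusterfs/geo-replication-slave/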

Re: [Gluster-users] File renaming not geo-replicated

2019-01-22 Thread Arnaud Launay
Hello Sunny, On Mon, Dec 17, 2018 at 04:19:04PM +0530, Sunny Kumar wrote:
> Can you please share geo-replication log for master and mount log from slave.
Master log, when doing:
root@prod01:/srv/www# touch coin2.txt && sleep 30 && mv coin2.txt bouh42.txt
root@prod01:/srv/www#
==> gsyncd.log

Re: [Gluster-users] usage of harddisks: each hdd a brick? raid?

2019-01-22 Thread Nithya Balachandran
On Tue, 22 Jan 2019 at 11:42, Amar Tumballi Suryanarayan <atumb...@redhat.com> wrote:
> On Thu, Jan 10, 2019 at 1:56 PM Hu Bert wrote:
>> Hi,
>>
>> > > We are also using 10TB disks, heal takes 7-8 days.
>> > > You can play with "cluster.shd-max-threads" setting. It is default 1 I
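For reference, that setting is changed per volume like this (the volume name and thread count here are placeholders, not values recommended in the thread):

  gluster volume set <volname> cluster.shd-max-threads 4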

Re: [Gluster-users] Self/Healing process after node maintenance

2019-01-22 Thread Ravishankar N
On 01/22/2019 02:57 PM, Martin Toth wrote:
Hi all, I just want to make sure I understand exactly how the self-healing process works, because I need to take one of my nodes down for maintenance. I have a replica 3 setup. Nothing complicated: 3 nodes, 1 volume, 1 brick per node (ZFS pool). All nodes running

[Gluster-users] Self/Healing process after node maintenance

2019-01-22 Thread Martin Toth
Hi all, I just want to make sure I understand exactly how the self-healing process works, because I need to take one of my nodes down for maintenance. I have a replica 3 setup. Nothing complicated: 3 nodes, 1 volume, 1 brick per node (ZFS pool). All nodes running Qemu VMs and disks of VMs are on Gluster
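A minimal way to verify healing has completed once the node is back (the volume name is a placeholder):

  # list files/entries still pending heal
  gluster volume heal <volname> info
  # summary count of pending entries per brick
  gluster volume heal <volname> statistics heal-count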

Re: [Gluster-users] Increasing Bitrot speed glusterfs 4.1.6

2019-01-22 Thread Amar Tumballi Suryanarayan
On Tue, Jan 22, 2019 at 1:50 PM Amudhan P wrote:
> Is the Bitrot feature in Glusterfs production ready or is it in beta phase?

We have not done extensive performance testing with BitRot, as it is known to consume resources, and depending on the resources (CPU/Memory) available, the speed would

Re: [Gluster-users] [Bugs] Bricks are going offline unable to recover with heal/start force commands

2019-01-22 Thread Sanju Rakonde
Hi Shaik, Can you please provide us complete glusterd and cmd_history logs from all the nodes in the cluster? Also please paste the output of the following commands (from all nodes):
1. gluster --version
2. gluster volume info
3. gluster volume status
4. gluster peer status
5. ps -ax | grep
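A small convenience sketch for gathering those outputs on each node (the output path is an assumption, not from the thread):

  for cmd in "gluster --version" "gluster volume info" \
             "gluster volume status" "gluster peer status"; do
      echo "== $cmd ==" >> /tmp/gluster-diag.txt
      $cmd >> /tmp/gluster-diag.txt 2>&1
  done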

Re: [Gluster-users] [External] Re: Samba+Gluster: Performance measurements for small files

2019-01-22 Thread Davide Obbi
Hi David, I haven't tested Samba, only the GlusterFS FUSE client; I posted the results a few months ago. Tests were conducted using Gluster 4.1.5:
Options Reconfigured:
client.event-threads 3
performance.cache-size 8GB
performance.io-thread-count 24
network.inode-lru-limit 1048576
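For comparison, such options are applied per volume (the volume name below is a placeholder):

  gluster volume set <volname> client.event-threads 3
  gluster volume set <volname> performance.cache-size 8GB
  gluster volume set <volname> performance.io-thread-count 24
  gluster volume set <volname> network.inode-lru-limit 1048576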

Re: [Gluster-users] Increasing Bitrot speed glusterfs 4.1.6

2019-01-22 Thread Amudhan P
Is the Bitrot feature in Glusterfs production ready or is it in beta phase? On Mon, Jan 14, 2019 at 12:46 PM Amudhan P wrote:
> Resending mail.
>
> I have a total of 50GB of files per node and it has been more than 5 days, but the bitrot signature process has still not completed; yet 20GB+ files are
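One way to watch the signer/scrubber progress (the volume name is a placeholder):

  gluster volume bitrot <volname> scrub status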

Re: [Gluster-users] Samba+Gluster: Performance measurements for small files

2019-01-22 Thread David Spisla
Hello Amar, thank you for the advice. We already use the nl-cache option and a bunch of other settings. At the moment we are trying the samba-vfs-glusterfs plugin to access a gluster volume via Samba. Performance has increased now. Additionally, we are looking for some performance measurements to compare against.
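A minimal smb.conf share sketch for the glusterfs VFS module (share name, volume name, and log path are placeholders, not taken from this thread):

  [gluster-share]
      path = /
      vfs objects = glusterfs
      glusterfs:volume = <volname>
      glusterfs:logfile = /var/log/samba/glusterfs-<volname>.log
      kernel share modes = no
      read only = no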