I have just checked the archive and it seems that the diagram is missing, so I'm adding a URL link to it:
https://drive.google.com/file/d/1SiW21ASPXHRAEuE_jZ50R3FoO-NcnFqT/view?usp=sharing
My version is 3.12.15
Best Regards,
Strahil Nikolov
On Wed, Jan 23, 2019 at 1:59 AM Lindolfo Meira wrote:
> Dear all,
>
> I've been trying to benchmark a gluster file system using the MPIIO API of
> IOR. Almost every time I try to run the application with more than 6
> tasks performing I/O (mpirun -n N, for N > 6), I get the error: "writev:
Dear all,
I've been trying to benchmark a gluster file system using the MPIIO API of
IOR. Almost every time I try to run the application with more than 6
tasks performing I/O (mpirun -n N, for N > 6), I get the error: "writev:
Transport endpoint is not connected". And then each one of the
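For reference, a run of that shape is typically launched along these lines; the task count, sizes and file path below are placeholders rather than the values actually used:

  mpirun -n 8 ior -a MPIIO -w -r -b 1g -t 1m -o /mnt/glustervol/ior.testfile

Here -a MPIIO selects the MPI-IO backend, -b and -t set the per-task block and transfer sizes, and -o points at a file on the mounted Gluster volume.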
The Gluster community is pleased to announce the release of Gluster
4.1.7 and 5.3 (packages available at [1] & [2]).
Release notes for the release can be found at [3] & [4].
Major changes, features and limitations addressed in this release:
- This release fixes several security vulnerabilities
Hi Arnaud,
To analyse this behaviour I need the slave log and also the mount log from
the slave; please share the complete logs, not just snippets.
You can find the master logs under /var/log/glusterfs/geo-replication/*
and the slave logs under /var/log/glusterfs/geo-replication-slave/* on the slave node.
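If it is easier, the whole directories can be bundled and attached; a minimal sketch, assuming the standard log locations above:

  # on the master node
  tar czf geo-rep-master-logs.tar.gz /var/log/glusterfs/geo-replication/
  # on the slave node
  tar czf geo-rep-slave-logs.tar.gz /var/log/glusterfs/geo-replication-slave/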
- Sunny
Hello Sunny,
On Mon, Dec 17, 2018 at 04:19:04PM +0530, Sunny Kumar wrote:
> Can you please share the geo-replication log for the master and the mount log from the slave.
Master log, when doing
root@prod01:/srv/www# touch coin2.txt && sleep 30 && mv coin2.txt bouh42.txt
root@prod01:/srv/www#
==> gsyncd.log
On Tue, 22 Jan 2019 at 11:42, Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:
>
>
> On Thu, Jan 10, 2019 at 1:56 PM Hu Bert wrote:
>
>> Hi,
>>
>> > > We are also using 10TB disks, heal takes 7-8 days.
>> > > You can play with the "cluster.shd-max-threads" setting. It defaults to 1, I
>> > >
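For anyone who wants to try that, it is set like any other volume option; a minimal sketch, with the volume name as a placeholder:

  gluster volume set <volname> cluster.shd-max-threads 4

Raising it allows the self-heal daemon to heal more entries in parallel, at the cost of extra CPU and disk load while the heal runs.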
On 01/22/2019 02:57 PM, Martin Toth wrote:
Hi all,
I just want to make sure I understand exactly how the self-healing process works, because I
need to take one of my nodes down for maintenance.
I have a replica 3 setup. Nothing complicated: 3 nodes, 1 volume, 1 brick per
node (ZFS pool). All nodes running
Hi all,
I just want to make sure I understand exactly how the self-healing process works, because I
need to take one of my nodes down for maintenance.
I have a replica 3 setup. Nothing complicated: 3 nodes, 1 volume, 1 brick per
node (ZFS pool). All nodes run Qemu VMs and the disks of the VMs are on Gluster
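One way to confirm that healing has caught up before and after the maintenance window is to check the pending-heal entries; a minimal sketch, with the volume name as a placeholder:

  gluster volume heal <volname> info

Once every brick reports zero entries, the node that was taken down is back in sync with the other replicas.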
On Tue, Jan 22, 2019 at 1:50 PM Amudhan P wrote:
>
> Bitrot feature in Glusterfs is production ready or is it in beta phase?
>
>
We have not done extensive performance testing with BitRot, as it is known
to consume resources, and depending on the resources (CPU/Memory)
available, the speed would
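For anyone evaluating it, bitrot detection is enabled per volume and the scrubber can be queried afterwards; a minimal sketch, with the volume name as a placeholder:

  gluster volume bitrot <volname> enable
  gluster volume bitrot <volname> scrub status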
Hi Shaik,
Can you please provide us with the complete glusterd and cmd_history logs from all
the nodes in the cluster? Also, please paste the output of the following
commands (from all nodes):
1. gluster --version
2. gluster volume info
3. gluster volume status
4. gluster peer status
5. ps -ax | grep
Hi David,
I haven't tested Samba, only the GlusterFS FUSE client; I posted the results a few
months ago. The tests were conducted using Gluster 4.1.5:
Options Reconfigured:
client.event-threads 3
performance.cache-size 8GB
performance.io-thread-count 24
network.inode-lru-limit 1048576
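Each of those is an ordinary volume option, so the same tuning can be applied elsewhere with gluster volume set; a minimal sketch, with the volume name as a placeholder:

  gluster volume set <volname> client.event-threads 3
  gluster volume set <volname> performance.cache-size 8GB
  gluster volume set <volname> performance.io-thread-count 24
  gluster volume set <volname> network.inode-lru-limit 1048576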
Bitrot feature in Glusterfs is production ready or is it in beta phase?
On Mon, Jan 14, 2019 at 12:46 PM Amudhan P wrote:
> Resending mail.
>
> I have a total of 50GB of files per node, and more than 5 days have passed, but
> the bitrot signature process has still not completed; 20GB+ of files are
>
Hello Amar,
thank you for the advice. We already use the nl-cache option and a bunch of
other settings. At the moment we are trying the samba-vfs-glusterfs plugin to
access a gluster volume via Samba. The performance has improved now.
Additionally, we are looking for some performance measurements to compare
against.
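In case it helps with the comparison, the Samba side of a vfs_glusterfs setup usually looks roughly like this in smb.conf; the share and volume names below are placeholders, not your actual configuration:

  [gluster-share]
      path = /
      vfs objects = glusterfs
      glusterfs:volume = myvol
      glusterfs:logfile = /var/log/samba/glusterfs-myvol.log
      kernel share modes = no

With vfs_glusterfs, Samba talks to the volume through libgfapi directly instead of going through a FUSE mount, which is usually where the speedup comes from.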