fs-6.0, please take a statedump of the
client process (on any node) before and after the test, so we can
analyze the latency information of each translator.
With that information, I hope we will be in a better position to answer
the questions.
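For anyone following along, a minimal sketch of how this can be done, assuming a fuse-mounted volume named myvol (the volume name and dump path are placeholders; details may vary by version):

```shell
# Enable per-translator latency accounting on the volume
gluster volume set myvol diagnostics.latency-measurement on
gluster volume set myvol diagnostics.count-fop-hits on

# Take a statedump of the fuse client: SIGUSR1 makes a glusterfs
# process write its state to the statedump directory
# (/var/run/gluster by default)
kill -USR1 "$(pgrep -f 'glusterfs.*myvol' | head -n1)"

# ... run the benchmark, then dump again for the after snapshot ...
kill -USR1 "$(pgrep -f 'glusterfs.*myvol' | head -n1)"

# The dumps can then be diffed / inspected for per-translator latency
ls /var/run/gluster/*.dump.*
```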
On Wed, Apr 10, 2019 at 3:45 PM Pascal Suter wrote:
I may add that I have expanded Linux filesystems (xfs and ext4),
both via LVM and some by adding disks to a hardware RAID. From the OS
point of view it does not make a difference: once the
block device on which the filesystem resides is expanded, the procedure
is pretty much the same and so
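For reference, the usual sequence once the underlying block device has grown might look like this (the VG/LV names, mount point, and size are made-up examples):

```shell
# Grow the logical volume by 100G
lvextend -L +100G /dev/vg0/data

# xfs: grow online, addressed via the mount point
xfs_growfs /mnt/data

# ext4: grow online, addressed via the block device
resize2fs /dev/vg0/data
```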
Hi Cody
I'm still new to Gluster myself, so take my input with the necessary
skepticism:
if you care about performance (and it looks like you do), use ZFS mirror
pairs and not raidz volumes. In my experience (outside of Gluster),
raidz pools perform significantly worse than a hardware RAID5.
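A quick sketch of the two layouts, with placeholder pool and disk names:

```shell
# Striped mirror pairs (RAID10-style): better IOPS, 50% usable capacity
zpool create tank mirror sda sdb mirror sdc sdd

# raidz (RAID5-style): more usable capacity, but every small write
# touches the whole stripe, which tends to hurt random I/O
# zpool create tank raidz sda sdb sdc sdd
```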
Any advice is appreciated.
cheers
Pascal
On 04.04.19 12:03, Pascal Suter wrote:
I just noticed I left the most important parameters out :)
Here's the write command with filesize and recordsize in it as well :)
./iozone -i 0 -t 1 -F /mnt/gluster/storage/thread1 -+n -c -C -e -I -w
-+S 0 -s
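For readers unfamiliar with iozone, a rough gloss of those flags; the -s/-r values below are only illustrative examples, not the values actually used:

```shell
# -i 0   : write/rewrite test only
# -t 1   : throughput mode with one thread
# -+n    : no retests
# -c     : include close() in timing
# -C     : show per-child throughput
# -e     : include flush (fsync) in timing
# -I     : use O_DIRECT, bypassing the page cache
# -w     : keep the test files afterwards
# -+S 0  : data dedupability (0 = fully random data)
# -s/-r  : file size and record size (example values here)
./iozone -i 0 -t 1 -F /mnt/gluster/storage/thread1 -+n -c -C -e -I -w \
  -+S 0 -s 16g -r 1m
```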
Also, I think you should be able to get better performance if you use
vfs_glusterfs [1]. From what I understand, it does for Samba what
nfs-ganesha does for NFS: it uses libgfapi directly rather than
sharing a fuse mount.
[1] https://www.samba.org/samba/docs/current/man-html/vfs_glusterfs
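A minimal smb.conf share sketch using vfs_glusterfs might look like the following (share and volume names are placeholders; see the man page above for the full option list):

```ini
[gluster-share]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = myvol
    glusterfs:logfile = /var/log/samba/glusterfs-myvol.log
    ; required, since Samba cannot use kernel share modes on libgfapi
    kernel share modes = no
```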
doing wrong are very welcome.
I'm using Gluster 6.0, by the way.
regards
Pascal
On 03.04.19 12:28, Pascal Suter wrote:
Hi all
I am currently testing gluster on a single server. I have three bricks,
each a hardware RAID6 volume with thin provisioned LVM that was aligned
to the RAID and then formatted with xfs.
I've created a distributed volume so that entire files get distributed
across my three bricks.
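A sketch of how such a setup is typically created (the hostname, brick paths, and volume name are assumptions, not the actual ones used here):

```shell
# Plain distributed volume over three bricks on a single server;
# each file lands whole on exactly one brick
gluster volume create storage \
  server1:/data/brick1/storage \
  server1:/data/brick2/storage \
  server1:/data/brick3/storage

gluster volume start storage

# fuse-mount the volume for testing
mount -t glusterfs server1:/storage /mnt/gluster/storage
```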
fir