Hi Darrel,
Thanks again for your advice. I tried to take yesterday off and just not
think about it; back at it again today. Still no real progress. However, my
colleague upgraded our version to 3.13 yesterday, which has broken NFS and
caused some other issues for us. It did add the 'gluster
Hi,
Thank you for the update, sorry for the delay.
I did some more tests, but couldn't reproduce the behaviour of spiked
network bandwidth usage when quick-read is on. After upgrading, have you
remounted the clients? The fix will not take effect until the client
process is restarted.
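For a FUSE client, unmounting and remounting is usually enough to restart the client process so it picks up the upgraded code. A rough sketch, assuming a placeholder volume `myvol` served from `server1` and mounted at `/mnt/gluster` (all three names are examples, not from this thread):

```
# Unmount, then remount so the client process restarts with the new code
umount /mnt/gluster
mount -t glusterfs server1:/myvol /mnt/gluster
```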
If you have
I was one of the folks who wanted an NA/EMEA scheduled meeting, and I’m going to
have to miss it due to some real-life issues (a clogged sewer I’m going to have
to be dealing with at the time). Apologies; I’ll work on making the next one.
-Darrell
> On Apr 22, 2019, at 4:20 PM, FNU Raghavendra
Thanks for the info Karli,
I wasn’t aware ZFS dedup was such a dog. I guess I’ll leave that off. My data
gets 3.5:1 savings on compression alone. I was aware of striped sets. I will
be doing 6x striped sets across 12x disks.
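For what it’s worth, a layout along those lines (six 2-disk vdevs striped across 12 disks, dedup off, compression on) could be created roughly like this. The pool name `tank` and the disk names are placeholders, and I’m assuming the sets are mirror pairs, which may not match your design:

```
# Hypothetical: 6 two-way mirror vdevs, striped across 12 disks
zpool create tank \
  mirror sda sdb  mirror sdc sdd  mirror sde sdf \
  mirror sdg sdh  mirror sdi sdj  mirror sdk sdl
zfs set compression=lz4 tank   # compression on; dedup stays off
```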
On top of this design I’m going to try and test Intel Optane DIMM
Hello Gluster Users,
I am hoping someone can help me resolve an ongoing issue I've been
having. I'm new to mailing lists, so forgive me if I have gotten anything
wrong. We have noticed our performance deteriorating over the last few
weeks, easily measured by trying to do an ls on one of our
Hi Patrick,
Did this start only after the upgrade?
How do you determine which brick process to kill?
Are there a lot of files to be healed on the volume?
Can you provide a tcpdump of the slow listing from a separate test client
mount?
1. Mount the gluster volume on a different mount point
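For step 1 and the tcpdump capture, something along these lines should work. The volume, server, interface, and directory names below are placeholders, and the `host` filter assumes a single server; with multiple bricks you may want to capture on the gluster ports instead:

```
# 1. Mount the volume on a separate test mount point
mount -t glusterfs server1:/myvol /mnt/test

# Capture traffic while reproducing the slow listing
tcpdump -i eth0 -s 0 -w /tmp/slow-ls.pcap host server1 &
ls -l /mnt/test/some-directory
kill %1    # stop the capture
```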