On Tue, Mar 20, 2018 at 9:45 AM, Sam McLeod wrote:
> Excellent description, thank you.
>
> With performance.write-behind-trickling-writes ON (default):
>
> ## 4k randwrite
>
> # fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test
> --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite
> test: (g=0): rw=randwrite, bs=(R)
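For anyone wanting to repeat the comparison above, the option can be flipped
per volume from the gluster CLI and read back to confirm. A minimal sketch,
assuming a volume named myvol (the volume name is a placeholder) and a release
recent enough to expose the option:

# gluster volume set myvol performance.write-behind-trickling-writes off
# gluster volume get myvol performance.write-behind-trickling-writes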
On Tue, Mar 20, 2018 at 8:57 AM, Sam McLeod wrote:
> Hi Raghavendra,
>
> On 20 Mar 2018, at 1:55 pm, Raghavendra Gowdappa wrote:
>
> Aggregating a large number of small writes by write-behind into large
> writes has been merged on master:
> https://github.com/gluster/glusterfs/issues/364
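The issue link above tracks the feature; one way to locate the corresponding
commits on master is to search the project log (a sketch, nothing
thread-specific):

# git clone https://github.com/gluster/glusterfs.git
# cd glusterfs && git log --oneline --grep='write-behind'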
On Tue, Mar 20, 2018 at 1:55 AM, TomK wrote:
> On 3/19/2018 10:52 AM, Rik Theys wrote:
>
>> Hi,
>>
>> On 03/19/2018 03:42 PM, TomK wrote:
>>
>>> On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
>>> Removing NFS or NFS Ganesha from the equation, not very impressed on my
>>> own
Howdy all,
Sorry, I'm in Australia, so most of your replies came in overnight for me.
Note: At the end of this reply is a listing of all our volume settings
(gluster volume get volname all).
Note 2: I really wish Gluster used Discourse for this kind of community
troubleshooting and analysis, using a
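The settings listing Sam mentions comes from the gluster CLI; for example
(volname stands in for the actual volume):

# gluster volume get volname all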
On 3/19/2018 10:52 AM, Rik Theys wrote:
> Hi,
>
> On 03/19/2018 03:42 PM, TomK wrote:
>> On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
>> Removing NFS or NFS Ganesha from the equation, not very impressed on my
>> own setup either. For the writes it's doing, that's a lot of CPU usage
>> in top. Seems
Hi,
On 03/19/2018 03:42 PM, TomK wrote:
> On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
> Removing NFS or NFS Ganesha from the equation, not very impressed on my
> own setup either. For the writes it's doing, that's a lot of CPU usage
> in top. Seems bottlenecked via a single execution core
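A single hot core like this usually shows up as one busy thread; one way to
check, assuming the brick process in question is glusterfsd (a sketch):

# top -H -p "$(pidof glusterfsd | tr ' ' ',')"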
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Rik Theys
Sent: Monday, March 19, 2018 10:38 AM
To: gluster-users@gluster.org; mailingli...@smcleod.net
Subject: Re: [Gluster-users] Gluster very poor performance when copying small
files (1x (2+1) = 3, SSD)
Hi,
I've done some similar tests and experience similar performance issues
(see my 'gluster for home directories?' thread on the list).
If I read your mail correctly, you are comparing an NFS mount of the
brick disk against a gluster mount (using the fuse client)?
Which options do you have set
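For reference, the two mounts being compared would look roughly like this;
server, export, and volume names are placeholders:

# mount -t nfs server:/export/brick /mnt/nfs
# mount -t glusterfs server:/volname /mnt/gluster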
Hi Tom,
Thanks for your reply.
1. Yes, XFS is on a LUKS LV (see below).
2. Yes, I prefer fio, but each Gluster host gets between 50-100K 4K random
IOPS both write and read to disk.
3. Yes, we actually use 2x 10Gbit DACs in LACP, but we get full 10Gbit speeds
(and very low latency thanks to the
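A brick-local baseline like the one Sam quotes can be measured by pointing fio
directly at the brick filesystem, bypassing Gluster entirely; a sketch, with
the brick path as a placeholder:

# fio --name=brick-baseline --filename=/data/brick1/fio-test \
  --ioengine=libaio --direct=1 --gtod_reduce=1 --bs=4k --iodepth=32 \
  --size=256MB --rw=randwrite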
On 3/18/2018 6:13 PM, Sam McLeod wrote:
Even your NFS transfers are 12.5 or so MB per second or less.
1) Did you use fdisk and LVM under that XFS filesystem?
2) Did you benchmark the XFS with something like bonnie++? (There are
probably newer benchmark suites now.)
3) Did you benchmark your
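For completeness, a basic bonnie++ run against the XFS mount looks something
like this (mount point and user are placeholders):

# bonnie++ -d /mnt/xfs-test -u root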
Howdy all,
We're experiencing terrible small file performance when copying or moving
files on gluster clients.
In the example below, Gluster is taking ~6 minutes to copy 128MB / 21,000
files sideways on a client; doing the same thing on NFS (which I know is a
totally different solution etc. etc.)
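The sideways copy being timed is essentially this, run on the fuse mount
(paths are placeholders):

# time cp -a /mnt/gluster/source-dir /mnt/gluster/dest-dir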