To: Gencer W. Genç
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Yet another performance tuning for CephFS
On 07/18/17 14:10, Gencer W. Genç wrote:
>>> Are you sure? Your config didn't show this.
> Yes. I have dedicated 10GbE network between ceph nodes. Each ceph
> node has 10x 3TB SATA Hard Disk Drives (HDD).
>
>
> -Gencer.
>
>
> -----Original Message-----
> From: Peter Maloney [mailto:peter.malo...@brockmann-consult.de]
> Sent: Tuesday, July 18, 2017 2:47 PM
> To: gen...@gencgiyen.com
> Cc: ceph-users@lists.ceph.com
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Yet another performance tuning for CephFS
On 07/17/17 22:49, gen...@gencgiyen.com wrote:
> I have a separate 10GbE network for ceph and another for public.
>
Are you sure? Your config didn't show this.
> No they are not NVMe, unfortunately.
>
What kind of devices are they? Did you do the journal test?
http://www.sebastien-han.fr/blog/2014
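The journal test referenced above is the usual fio O_DSYNC write test. A minimal sketch of it, under the assumption that fio is installed and /dev/sdX is a stand-in for the actual journal device (this writes to the device, so only run it on a disk you can wipe):

```shell
# Single-threaded 4k synchronous writes, the access pattern a Ceph
# journal sees; a device unsuited for journaling will show very low IOPS.
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test
```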
From: Patrick Donnelly [mailto:pdonn...@redhat.com]
Sent: Monday, July 17, 2017 11:21 PM
To: gen...@gencgiyen.com
Cc: Ceph Users
Subject: Re: [ceph-users] Yet another performance tuning for CephFS
On Mon, Jul 17, 2017 at 1:08 PM, wrote:
> But let's try another. Let's say I have a file on my server which is
> 5GB. If I do this:
>
>
total_used    30575M
total_avail   55857G
total_space   55887G
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Tuesday, July 18, 2017 2:31 AM
To: Gencer Genç ; Patrick Donnelly
Cc: Ceph Users
Subject: Re: [ceph-users] Yet another performance tuning for CephFS
What are
--
> From: Patrick Donnelly [mailto:pdonn...@redhat.com]
> Sent: Monday, July 17, 2017 23:21
> To: gen...@gencgiyen.com
> Cc: Ceph Users
> Subject: Re: [ceph-users] Yet another performance tuning for CephFS
>
> On Mon, Jul 17, 2017 at 1:08 PM, wrote:
> > But let's try anot
0mb/s? What prevents it? I'm really wondering this.
Gencer.
-----Original Message-----
From: Patrick Donnelly [mailto:pdonn...@redhat.com]
Sent: Monday, July 17, 2017 23:21
To: gen...@gencgiyen.com
Cc: Ceph Users
Subject: Re: [ceph-users] Yet another performance tuning for CephFS
On Mon, Jul 17, 20
I have a separate 10GbE network for ceph and another for public.
No, they are not NVMe, unfortunately.
Do you know any test command that I can try to see if this is the max
read speed from rsync?
Because I tried one thing a few minutes ago. I opened 4 ssh channels and
ran the rsync command and co
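A minimal sketch of that multi-stream experiment (file names and the destination are hypothetical stand-ins for the big file and the CephFS mount): run four synchronous writers in parallel and compare the aggregate to a single stream. If the aggregate scales up while one stream does not, the limit is per-stream latency rather than raw bandwidth.

```shell
# Hypothetical recreation of the "4 parallel channels" test using dd
# sync writes; DEST stands in for /mnt/cephfs.
DEST=${DEST:-/tmp/cephfs-stream-test}
mkdir -p "$DEST"
for i in 1 2 3 4; do
  dd if=/dev/zero of="$DEST/stream$i" bs=1M count=64 oflag=dsync 2>/dev/null &
done
wait  # total throughput is the sum of the four streams
ls "$DEST"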
You should have a separate public and cluster network. And journal or
wal/db performance is important... are the devices fast NVMe?
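For reference, separate public and cluster networks are declared in the `[global]` section of ceph.conf roughly like this (the subnets below are placeholders, not taken from the thread; substitute the actual 10GbE ranges):

```
[global]
    public network  = 192.168.0.0/24   # client <-> MON/OSD traffic
    cluster network = 10.0.0.0/24      # OSD <-> OSD replication and recovery
```

Without an explicit cluster network, replication traffic shares the public interface, which can halve effective client write bandwidth.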
On 07/17/17 21:31, gen...@gencgiyen.com wrote:
>
> Hi,
>
> I located and applied almost every different tuning setting/config
> over the internet. I couldn’t m
On Mon, Jul 17, 2017 at 1:08 PM, wrote:
> But let's try another. Let's say I have a file on my server which is 5GB. If I
> do this:
>
> $ rsync ./bigfile /mnt/cephfs/targetfile --progress
>
> Then I see max 200 MB/s. I think it is still slow :/ Is this expected?
Perhaps that is the bandwidth l
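As a sanity check on whether ~200 MB/s is slow, here is a hypothetical back-of-envelope calculation. Every figure is an assumption for illustration, not taken from the thread: 2 nodes with 10 HDDs each, ~100 MB/s sequential per HDD, size=2 replication, and a co-located journal that writes everything twice per OSD.

```shell
# Rough aggregate client write bandwidth for an all-HDD pool.
osds=20            # 2 nodes x 10 HDDs (assumption)
hdd_mb_s=100       # sequential write per HDD (assumption)
replication=2      # copies of each object (assumption)
journal_penalty=2  # journal + data store on the same disk

raw=$(( osds * hdd_mb_s ))
echo $(( raw / (replication * journal_penalty) ))  # prints 500 (MB/s, all clients combined)
```

A single sequential stream sees far less than the aggregate, since each write waits on one placement group at a time, so 200 MB/s from one rsync is not obviously below what this hardware can do.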
Hi Patrick.
Thank you for prompt response.
I added the ceph.conf file but I think you missed it.
These are the configs I tuned (I also disabled debug logs in the global
section). Correct me if I understood you wrongly on this.
Btw, before I give you the config I want to answer on sync IO. Yes, if I
remo
Hi Gencer,
On Mon, Jul 17, 2017 at 12:31 PM, wrote:
> I located and applied almost every different tuning setting/config over the
> internet. I couldn't manage to speed it up one byte further. It is
> always the same speed whatever I do.
I believe you're frustrated but this type of informatio
Hi,
I located and applied almost every different tuning setting/config over the
internet. I couldn't manage to speed it up one byte further. It is
always the same speed whatever I do.
I was on Jewel; now I tried BlueStore on Luminous. Still the exact same
speed I get from cephfs.
It does