Re: [ceph-users] cephfs performance issue MDSs report slow requests and osd memory usage

2019-09-24 Thread Robert LeBlanc
On Tue, Sep 24, 2019 at 4:33 AM Thomas <74cmo...@gmail.com> wrote: > > Hi, > > I'm experiencing the same issue with these settings in ceph.conf: > osd op queue = wpq > osd op queue cut off = high > > Furthermore I cannot read any old data in the relevant pool that is > serving

Re: [ceph-users] cephfs performance issue MDSs report slow requests and osd memory usage

2019-09-24 Thread Thomas
Hi, I'm experiencing the same issue with these settings in ceph.conf:     osd op queue = wpq     osd op queue cut off = high Furthermore, I cannot read any old data in the relevant pool that is serving CephFS. However, I can write new data and read this new data. Regards Thomas Am
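
For context, the two options quoted in this thread are OSD-side settings that normally live in the [osd] section of ceph.conf; a minimal sketch, assuming a Nautilus-era cluster and that the OSDs are restarted afterwards so the queue settings take effect:

    # /etc/ceph/ceph.conf (sketch; exact section placement is an assumption)
    [osd]
    osd op queue = wpq
    osd op queue cut off = high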

Re: [ceph-users] cephfs performance issue MDSs report slow requests and osd memory usage

2019-09-24 Thread Yoann Moulin
Hello, >> I have a Ceph Nautilus Cluster 14.2.1 for cephfs only on 40x 1.8T SAS disks >> (no SSD) in 20 servers. >> >> I often get "MDSs report slow requests" and plenty of "[WRN] 3 slow >> requests, 0 included below; oldest blocked for > 60281.199503 secs" >> >> After a few investigations, I

Re: [ceph-users] cephfs performance issue MDSs report slow requests and osd memory usage

2019-09-23 Thread Robert LeBlanc
On Thu, Sep 19, 2019 at 2:36 AM Yoann Moulin wrote: > > Hello, > > I have a Ceph Nautilus Cluster 14.2.1 for cephfs only on 40x 1.8T SAS disks > (no SSD) in 20 servers. > > > cluster: > > id: 778234df-5784-4021-b983-0ee1814891be > > health: HEALTH_WARN > > 2 MDSs report

[ceph-users] cephfs performance issue MDSs report slow requests and osd memory usage

2019-09-19 Thread Yoann Moulin
Hello, I have a Ceph Nautilus Cluster 14.2.1 for cephfs only on 40x 1.8T SAS disks (no SSD) in 20 servers. > cluster: > id: 778234df-5784-4021-b983-0ee1814891be > health: HEALTH_WARN > 2 MDSs report slow requests > > services: > mon: 3 daemons, quorum
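
A hedged sketch of how slow MDS requests are usually inspected; <mds-id> is a placeholder for the active daemon's name, and the admin-socket commands assume access to the host running that MDS:

    # Which daemons report slow requests, and since when
    ceph health detail
    # Operations currently stuck in the MDS, via its admin socket
    ceph daemon mds.<mds-id> dump_ops_in_flight
    # Recently completed slow operations kept by the daemon
    ceph daemon mds.<mds-id> dump_historic_ops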

[ceph-users] CephFS performance improved in 13.2.5?

2019-03-20 Thread Sergey Malinin
Hello, Yesterday I upgraded from 13.2.2 to 13.2.5 and so far I have only seen significant improvements in MDS operations. Needless to say I'm happy, but I didn't notice anything related in release notes. Am I missing something, possibly new configuration settings? Screenshots below:

Re: [ceph-users] CephFS performance vs. underlying storage

2019-01-30 Thread Marc Roos
that decreases your iops. -Original Message- From: Hector Martin [mailto:hec...@marcansoft.com] Sent: 30 January 2019 19:43 To: ceph-users@lists.ceph.com Subject: [ceph-users] CephFS performance vs. underlying storage Hi list, I'm experimentally running single-host CephFS as a replacement

[ceph-users] CephFS performance vs. underlying storage

2019-01-30 Thread Hector Martin
Hi list, I'm experimentally running single-host CephFS as a replacement for "traditional" filesystems. My setup is 8×8TB HDDs using dm-crypt, with CephFS on a 5+2 EC pool. All of the components are running on the same host (mon/osd/mds/kernel CephFS client). I've set the stripe_unit/object_size
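
For reference, stripe_unit/object_size on CephFS are set through the layout virtual xattrs; a minimal sketch (the mount point and the values are illustrative only, and object_size must be a multiple of stripe_unit):

    # Layouts set on a directory apply to files created under it afterwards
    setfattr -n ceph.dir.layout.stripe_unit  -v 1048576 /mnt/cephfs/data
    setfattr -n ceph.dir.layout.stripe_count -v 4       /mnt/cephfs/data
    setfattr -n ceph.dir.layout.object_size  -v 4194304 /mnt/cephfs/data
    # Inspect the resulting layout
    getfattr -n ceph.dir.layout /mnt/cephfs/data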

Re: [ceph-users] cephfs performance degraded very fast

2019-01-22 Thread Yan, Zheng
On Tue, Jan 22, 2019 at 8:24 PM renjianxinlover wrote: > > Hi, > at times, due to cache pressure or failure to release caps, client application mounts > get stuck. > My use case is a Kubernetes cluster with automatic kernel-client mounts on the > nodes. > Has anyone faced the same issue, or does anyone have a related

[ceph-users] cephfs performance degraded very fast

2019-01-22 Thread renjianxinlover
Hi, at times, due to cache pressure or failure to release caps, client application mounts get stuck. My use case is a Kubernetes cluster with automatic kernel-client mounts on the nodes. Has anyone faced the same issue, or does anyone have a related solution? Brs
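
A rough sketch of the checks usually suggested when kernel-client mounts hang under cache pressure; <mds-id> and <session-id> are placeholders:

    # Health detail names clients failing to respond to cache pressure
    ceph health detail
    # List client sessions and the number of caps each one holds
    ceph daemon mds.<mds-id> session ls
    # As a last resort, evict a stuck session (its client must then remount)
    ceph daemon mds.<mds-id> session evict <session-id>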

Re: [ceph-users] CephFS performance.

2018-10-04 Thread Patrick Donnelly
On Thu, Oct 4, 2018 at 2:10 AM Ronny Aasen wrote: > in rbd there is a fancy striping solution, by using --stripe-unit and > --stripe-count. This would get more spindles running; perhaps consider > using rbd instead of cephfs if it fits the workload. CephFS also supports custom striping via
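
For reference, a sketch of the RBD striping flags mentioned above; the pool and image names are made up and the values are illustrative:

    # Create a 100 GB image striped across 8 objects at a time, 64 KiB per stripe unit
    rbd create rbdpool/striped-img --size 102400 --stripe-unit 65536 --stripe-count 8
    # Verify the striping parameters
    rbd info rbdpool/striped-img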

Re: [ceph-users] CephFS performance.

2018-10-04 Thread Ronny Aasen
On 10/4/18 7:04 AM, jes...@krogh.cc wrote: Hi All. First, thanks for the good discussion and strong answers I've gotten so far. Current cluster setup is 4 x 10 x 12TB 7.2K RPM drives with all and 10GbitE and metadata on rotating drives - 3x replication - 256GB memory in OSD hosts and 32+

[ceph-users] CephFS performance.

2018-10-03 Thread jesper
Hi All. First, thanks for the good discussion and strong answers I've gotten so far. Current cluster setup is 4 x 10 x 12TB 7.2K RPM drives with all and 10GbitE and metadata on rotating drives - 3x replication - 256GB memory in OSD hosts and 32+ cores. Behind a PERC controller with each disk as a single-disk RAID0 and BBWC.

Re: [ceph-users] cephfs performance issue

2018-03-29 Thread Ouyang Xu
Hi David: That works, thank you very much! Best regards, Steven On 29 March 2018, 18:30, David C wrote: Pretty sure you're getting stung by: http://tracker.ceph.com/issues/17563 Consider using an elrepo kernel, 4.14 works well for me. On Thu, 29 Mar 2018, 09:46 Dan van der Ster,

Re: [ceph-users] cephfs performance issue

2018-03-29 Thread David C
Pretty sure you're getting stung by: http://tracker.ceph.com/issues/17563 Consider using an elrepo kernel, 4.14 works well for me. On Thu, 29 Mar 2018, 09:46 Dan van der Ster, wrote: > On Thu, Mar 29, 2018 at 10:31 AM, Robert Sander >
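
A sketch of the usual elrepo route on CentOS 7; the package and repository names are the commonly used ones but should be treated as assumptions for your environment:

    # Check the running kernel first
    uname -r
    # Install a mainline kernel from elrepo (assumes the elrepo release RPM and GPG key are already set up)
    yum --enablerepo=elrepo-kernel install kernel-ml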

Re: [ceph-users] cephfs performance issue

2018-03-29 Thread Dan van der Ster
On Thu, Mar 29, 2018 at 10:31 AM, Robert Sander wrote: > On 29.03.2018 09:50, ouyangxu wrote: > >> I'm using Ceph 12.2.4 with CentOS 7.4, and trying to use cephfs for >> MariaDB deployment, > > Don't do this. > As the old saying goes: If it hurts, stop doing it. Why

Re: [ceph-users] cephfs performance issue

2018-03-29 Thread Robert Sander
On 29.03.2018 09:50, ouyangxu wrote: > I'm using Ceph 12.2.4 with CentOS 7.4, and trying to use cephfs for > MariaDB deployment, Don't do this. As the old saying goes: If it hurts, stop doing it. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin

[ceph-users] cephfs performance issue

2018-03-29 Thread ouyangxu
Hi Ceph users: I'm using Ceph 12.2.4 with CentOS 7.4, and trying to use cephfs for MariaDB deployment. The configuration is default, but I got very poor performance while creating tables; with the local file system, there is no such issue. Here are the SQL scripts I used: [root@cmv01cn01]$ cat

Re: [ceph-users] CephFS Performance

2017-05-10 Thread Webert de Souza Lima
On Tue, May 9, 2017 at 9:07 PM, Brady Deetz wrote: > So with email, you're talking about lots of small reads and writes. In my > experience with DICOM data (thousands of 20KB files per directory), cephfs > doesn't perform very well at all on platter drives. I haven't

Re: [ceph-users] CephFS Performance

2017-05-09 Thread Brady Deetz
Re-adding list: So with email, you're talking about lots of small reads and writes. In my experience with DICOM data (thousands of 20KB files per directory), cephfs doesn't perform very well at all on platter drives. I haven't experimented with pure ssd configurations, so I can't comment on that.

Re: [ceph-users] CephFS Performance

2017-05-09 Thread Webert de Souza Lima
On Tue, May 9, 2017 at 4:40 PM, Brett Niver wrote: > What is your workload like? Do you have a single or multiple active > MDS ranks configured? User traffic is heavy. I can't really say in terms of MB/s or IOPS, but it's an email server with 25k+ users, usually about 6k

Re: [ceph-users] CephFS Performance

2017-05-09 Thread Wido den Hollander
> On 9 May 2017 at 20:26, Brady Deetz wrote: > > > If I'm reading your cluster diagram correctly, I'm seeing a 1gbps > interconnect, presumably cat6. Due to the additional latency of performing > metadata operations, I could see cephfs performing at those speeds. Are you >

Re: [ceph-users] CephFS Performance

2017-05-09 Thread Brett Niver
What is your workload like? Do you have a single or multiple active MDS ranks configured? On Tue, May 9, 2017 at 3:10 PM, Webert de Souza Lima wrote: > That 1gbps link is the only option I have for those servers, unfortunately. > It's all dedicated server rentals from
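
For context, the number of active MDS ranks is set per filesystem; a minimal sketch on a recent release, with the filesystem name "cephfs" as a placeholder:

    # Allow two active MDS ranks (remaining daemons stay standby)
    ceph fs set cephfs max_mds 2
    # Confirm which daemons are active and which are standby
    ceph fs status cephfs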

Re: [ceph-users] CephFS Performance

2017-05-09 Thread Webert de Souza Lima
That 1gbps link is the only option I have for those servers, unfortunately. It's all dedicated server rentals from OVH. I don't have information regarding the internals of the vRack. From what you said, I understand that one should expect a performance drop in comparison to ceph rbd using the

Re: [ceph-users] CephFS Performance

2017-05-09 Thread Brady Deetz
If I'm reading your cluster diagram correctly, I'm seeing a 1gbps interconnect, presumably cat6. Due to the additional latency of performing metadata operations, I could see cephfs performing at those speeds. Are you using jumbo frames? Also are you routing? If you're routing, the router will
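
A rough sketch of the jumbo-frame check implied here; the interface name and peer host are placeholders, and every switch or router hop must allow the larger MTU:

    # Raise the MTU on the cluster-facing interface
    ip link set dev eth0 mtu 9000
    # Verify end to end: 8972 = 9000 - 20 (IP header) - 8 (ICMP header), fragmentation forbidden
    ping -M do -s 8972 <peer-osd-host>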

[ceph-users] CephFS Performance

2017-05-09 Thread Webert de Souza Lima
Hello all, I've been using cephfs for a while but never really evaluated its performance. As I put up a new ceph cluster, I thought that I should run a benchmark to see if I'm going the right way. From the results I got, I see that RBD performs *a lot* better in comparison to cephfs. The cluster is
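
A hedged sketch of the kind of comparison being described, i.e. raw pool bandwidth versus a file workload through a CephFS mount; the pool name, mount point, and sizes are illustrative:

    # Raw object write/read bandwidth of the backing data pool
    rados bench -p cephfs_data 60 write --no-cleanup
    rados bench -p cephfs_data 60 seq
    # A comparable sequential write through the CephFS mount
    fio --name=cephfs-seq --directory=/mnt/cephfs/bench \
        --rw=write --bs=4M --size=4G --ioengine=libaio --direct=1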

Re: [ceph-users] cephfs performance benchmark -- metadata intensive

2016-08-12 Thread John Spray
On Thu, Aug 11, 2016 at 1:24 PM, Brett Niver wrote: > Patrick and I had a related question yesterday: are we able to dynamically > vary cache size to artificially manipulate cache pressure? Yes -- at the top of MDCache::trim the max size is read straight out of g_conf so it
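
Given the answer above (the limit is read from g_conf when the cache is trimmed), a runtime change can be injected without restarting the MDS; a sketch using the Jewel-era inode-count option, with the daemon id as a placeholder (later releases use mds_cache_memory_limit in bytes instead):

    # Temporarily raise the cache limit on a running MDS
    ceph tell mds.<mds-id> injectargs '--mds_cache_size 200000'
    # Confirm the value the daemon is actually using
    ceph daemon mds.<mds-id> config get mds_cache_size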

Re: [ceph-users] cephfs performance benchmark -- metadata intensive

2016-08-11 Thread Brett Niver
Patrick and I had a related question yesterday: are we able to dynamically vary cache size to artificially manipulate cache pressure? On Thu, Aug 11, 2016 at 6:07 AM, John Spray wrote: > On Thu, Aug 11, 2016 at 8:29 AM, Xiaoxi Chen > wrote: > > Hi , >

Re: [ceph-users] cephfs performance benchmark -- metadata intensive

2016-08-11 Thread John Spray
On Thu, Aug 11, 2016 at 8:29 AM, Xiaoxi Chen wrote: > Hi , > > > Here is the slide I shared yesterday on performance meeting. > Thanks and hoping for inputs. > > > http://www.slideshare.net/XiaoxiChen3/cephfs-jewel-mds-performance-benchmark These are definitely