Re: [ceph-users] CephFS Performance

2017-05-09 Thread Brady Deetz
Re-adding list: So with email, you're talking about lots of small reads and writes. In my experience with DICOM data (thousands of 20KB files per directory), cephfs doesn't perform very well at all on platter drives. I haven't experimented with pure SSD configurations, so I can't comment on that.

Re: [ceph-users] CephFS Performance

2017-05-09 Thread Webert de Souza Lima
On Tue, May 9, 2017 at 4:40 PM, Brett Niver wrote: > What is your workload like? Do you have a single or multiple active MDS ranks configured? User traffic is heavy. I can't really say in terms of MB/s or IOPS, but it's an email server with 25k+ users, usually about 6k
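For context, the number of active MDS ranks can be checked and raised at runtime. A minimal sketch, assuming a filesystem named cephfs and a release of that era where the allow_multimds flag is still required:

    # see how many MDS daemons are active vs. standby
    ceph mds stat
    # permit and request two active MDS ranks (flag needed on pre-Luminous releases)
    ceph fs set cephfs allow_multimds true
    ceph fs set cephfs max_mds 2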

Re: [ceph-users] CephFS Performance

2017-05-09 Thread Wido den Hollander
> On 9 May 2017 at 20:26, Brady Deetz wrote: > If I'm reading your cluster diagram correctly, I'm seeing a 1gbps interconnect, presumably cat6. Due to the additional latency of performing metadata operations, I could see cephfs performing at those speeds. Are you

Re: [ceph-users] CephFS Performance

2017-05-09 Thread Brett Niver
What is your workload like? Do you have a single or multiple active MDS ranks configured? On Tue, May 9, 2017 at 3:10 PM, Webert de Souza Lima wrote: > That 1gbps link is the only option I have for those servers, unfortunately. It's all dedicated server rentals from

Re: [ceph-users] CephFS Performance

2017-05-09 Thread Webert de Souza Lima
That 1gbps link is the only option I have for those servers, unfortunately. It's all dedicated server rentals from OVH. I don't have information regarding the internals of the vRack. From what you said, I understand that one should expect a performance drop in comparison to ceph rbd using the

Re: [ceph-users] CephFS Performance

2017-05-09 Thread Brady Deetz
If I'm reading your cluster diagram correctly, I'm seeing a 1gbps interconnect, presumably cat6. Due to the additional latency of performing metadata operations, I could see cephfs performing at those speeds. Are you using jumbo frames? Also are you routing? If you're routing, the router will
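For anyone checking the jumbo-frame question on their own nodes, a minimal sketch (eth0 and the peer address are placeholders, and the switch ports must also allow MTU 9000):

    # check the current MTU on the cluster-facing interface
    ip link show eth0
    # enable jumbo frames on that interface
    ip link set dev eth0 mtu 9000
    # verify that 9000-byte frames pass end-to-end without fragmentation (8972 = 9000 minus 28 bytes of headers)
    ping -M do -s 8972 <other-node-ip>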

[ceph-users] CephFS Performance

2017-05-09 Thread Webert de Souza Lima
Hello all, I've been using cephfs for a while but never really evaluated its performance. As I put up a new ceph cluster, I thought that I should run a benchmark to see if I'm going the right way. From the results I got, I see that RBD performs *a lot* better in comparison to cephfs. The cluster is
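The benchmark commands themselves are not shown in this preview; as a rough sketch of how such an RBD-vs-CephFS comparison is commonly run (fio, the client name admin, the pool name rbd, the image name bench and the mount point /mnt/cephfs are all assumptions here, not details from the original message):

    # RBD: random 4k writes against an image through librbd
    fio --name=rbdtest --ioengine=rbd --clientname=admin --pool=rbd --rbdname=bench \
        --size=1G --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based --direct=1
    # CephFS: the same pattern against a file on the mounted filesystem
    fio --name=fstest --directory=/mnt/cephfs --size=1G \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based --direct=1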

[ceph-users] Ceph MDS daemonperf

2017-05-09 Thread Webert de Souza Lima
Hi, by issuing `ceph daemonperf mds.x` I see the following column groups and columns: -mds-- (rlat inos caps), --mds_server-- (hsr hcs hcr), ---objecter--- (writ read actv), -mds_cache- (recd recy stry purg), ---mds_log (segs evts subm), followed by rows of values such as 0 95 41 | 0 0 0 | 0 0 0 | ...
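Laid out the way the terminal shows it, the header from that preview reconstructs roughly like this (a best-effort reconstruction of the flattened text above, not a verbatim capture; the per-second values scroll below it):

    -----mds------ --mds_server-- ---objecter--- -----mds_cache----- ---mds_log----
    rlat inos caps|hsr  hcs  hcr |writ read actv|recd recy stry purg|segs evts subm|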

Re: [ceph-users] Performance after adding a node

2017-05-09 Thread David Turner
You can modify the settings while a node is being added. It's actually a good time to do it. Note that when you decrease the settings, it doesn't stop the current PGs from backfilling, it just stops the next ones from starting until there is a slot open on the OSD according to the new setting.
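For reference, the settings being discussed can be changed at runtime; a minimal sketch (the value 1 is just an illustrative conservative choice, not a recommendation from this thread):

    # lower backfill/recovery concurrency on all OSDs without restarting them
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'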

Re: [ceph-users] Performance after adding a node

2017-05-09 Thread Daniel Davidson
Thanks, I had a feeling one of these was too high. Once the current node finishes I will try again with your recommended settings. Dan. On 05/08/2017 05:03 PM, David Turner wrote: > WOW!!! Those are some awfully high backfilling settings you have there. They are 100% the reason that your

[ceph-users] Antw: Re: Performance after adding a node

2017-05-09 Thread Steffen Weißgerber
Hi, checking the actual value for osd_max_backfills on our cluster (0.94.9), I also made a config diff of the OSD configuration (ceph daemon osd.0 config diff) and wondered why there's a displayed default of 10, which differs from the documented default at
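To compare what a daemon is actually running with against its defaults, the admin-socket commands mentioned above look roughly like this (osd.0 stands in for whichever OSD runs on the local host):

    # show only the settings that differ from the daemon's built-in defaults
    ceph daemon osd.0 config diff
    # show the value currently in effect for one option
    ceph daemon osd.0 config get osd_max_backfills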