Hi,
I'm experiencing the same issue with this setting in ceph.conf:
osd op queue = wpq
osd op queue cut off = high
Furthermore, I cannot read any old data in the relevant pool that is
serving CephFS.
However, I can write new data and read this new data.
Regards
Thomas
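For reference, a minimal sketch of applying and verifying these settings
(option names as in Nautilus; osd.0 is illustrative, and both options
generally require an OSD restart to take effect):

    # ceph.conf on each OSD host
    [osd]
    osd op queue = wpq
    osd op queue cut off = high

    # after restarting the OSDs, verify via the admin socket on the OSD host
    ceph daemon osd.0 config get osd_op_queue
    ceph daemon osd.0 config get osd_op_queue_cut_off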
Hello,
>> I have a Ceph Nautilus Cluster 14.2.1 for cephfs only on 40x 1.8T SAS disk
>> (no SSD) in 20 servers.
>>
>> I often get "MDSs report slow requests" and plenty of "[WRN] 3 slow
>> requests, 0 included below; oldest blocked for > 60281.199503 secs"
>>
>> After a few investigations, I
On Thu, Sep 19, 2019 at 2:36 AM Yoann Moulin wrote:
Hello,
I have a Ceph Nautilus Cluster 14.2.1 for cephfs only on 40x 1.8T SAS disk
(no SSD) in 20 servers.
> cluster:
>   id:     778234df-5784-4021-b983-0ee1814891be
>   health: HEALTH_WARN
>           2 MDSs report slow requests
>
> services:
>   mon: 3 daemons, quorum
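A hedged sketch for narrowing down such slow requests (the MDS name mds1 is
illustrative; run the daemon commands on the host where that MDS runs):

    ceph health detail                      # which MDSs report slow requests
    ceph daemon mds.mds1 ops                # dump MDS ops in flight, with ages
    ceph daemon mds.mds1 objecter_requests  # OSD requests the MDS is waiting on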
Hello,
Yesterday I upgraded from 13.2.2 to 13.2.5 and so far I have only seen
significant improvements in MDS operations. Needless to say I'm happy, but I
didn't notice anything related in release notes. Am I missing something,
possibly new configuration settings?
Screenshots below: (images not preserved)
that decreases your iops.
-----Original Message-----
From: Hector Martin [mailto:hec...@marcansoft.com]
Sent: 30 January 2019 19:43
To: ceph-users@lists.ceph.com
Subject: [ceph-users] CephFS performance vs. underlying storage
Hi list,
I'm experimentally running single-host CephFS as a replacement for
"traditional" filesystems.
My setup is 8×8TB HDDs using dm-crypt, with CephFS on a 5+2 EC pool. All
of the components are running on the same host (mon/osd/mds/kernel
CephFS client). I've set the stripe_unit/object_size
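For context, this kind of striping is set through CephFS file layouts; a
minimal sketch using the layout xattrs (mount point and sizes illustrative;
a directory layout only affects files created after it is set):

    # 1 MiB stripe unit, 8 stripes per object set, 8 MiB objects
    setfattr -n ceph.dir.layout.stripe_unit  -v 1048576 /mnt/cephfs/mydir
    setfattr -n ceph.dir.layout.stripe_count -v 8       /mnt/cephfs/mydir
    setfattr -n ceph.dir.layout.object_size  -v 8388608 /mnt/cephfs/mydir
    getfattr -n ceph.dir.layout /mnt/cephfs/mydir   # inspect the result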
Hi,
From time to time, under cache pressure or after a caps release failure,
client application mounts get stuck.
My use case is a Kubernetes cluster with automatic kernel-client mounts on
the nodes.
Has anyone faced the same issue, or found a related solution?
Brs
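A possible starting point for diagnosing stuck mounts like this (mds1 is
illustrative; the debugfs path assumes a kernel client with debugfs mounted):

    ceph health detail               # e.g. "clients failing to respond to cache pressure"
    ceph daemon mds.mds1 session ls  # per-client session state and caps held
    # on a stuck node, list the kernel client's pending MDS requests:
    cat /sys/kernel/debug/ceph/*/mdsc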
On Thu, Oct 4, 2018 at 2:10 AM Ronny Aasen wrote:
> in rbd there is a fancy striping solution, by using --stripe-unit and
> --stripe-count. This would get more spindles running; perhaps consider
> using rbd instead of cephfs if it fits the workload.
CephFS also supports custom striping via
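For comparison, a sketch of the rbd striping Ronny describes (pool/image
names and sizes illustrative; the stripe unit must evenly divide the object
size):

    # 16 MiB objects, striped in 64 KiB units across 16 objects at a time
    rbd create mypool/myimage --size 100G \
        --object-size 16M --stripe-unit 64K --stripe-count 16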
Hi All.
First, thanks for the good discussion and strong answers I've gotten so far.
The current cluster setup is 4 x 10 x 12TB 7.2K RPM drives, all on 10GbitE,
with metadata on rotating drives - 3x replication - 256GB memory in the OSD
hosts and 32+ cores. Behind a PERC controller with each disk as a
single-drive RAID0 and BBWC.
Hi David:
That works, thank you very much!
Best regards,
Steven
On 29 March 2018 at 18:30, David C wrote:
Pretty sure you're getting stung by: http://tracker.ceph.com/issues/17563
Consider using an elrepo kernel, 4.14 works well for me.
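A sketch of that elrepo route on CentOS 7 (the release-package URL is as
commonly documented and may have moved; verify against elrepo.org first):

    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
    yum --enablerepo=elrepo-kernel install kernel-ml   # mainline kernel
    # then set the new kernel as the grub default and reboot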
On Thu, Mar 29, 2018 at 10:31 AM, Robert Sander wrote:
> On 29.03.2018 09:50, ouyangxu wrote:
>
>> I'm using Ceph 12.2.4 with CentOS 7.4, and trying to use cephfs for
>> MariaDB deployment,
>
> Don't do this.
> As the old saying goes: If it hurts, stop doing it.
Why
On 29.03.2018 09:50, ouyangxu wrote:
> I'm using Ceph 12.2.4 with CentOS 7.4, and trying to use cephfs for
> MariaDB deployment,
Don't do this.
As the old saying goes: If it hurts, stop doing it.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
Hi Ceph users:
I'm using Ceph 12.2.4 with CentOS 7.4, and trying to use cephfs for MariaDB
deployment. The configuration is default, but I got very poor performance
when creating tables; with a local file system there is no such issue.
Here are the SQL scripts I used:
[root@cmv01cn01]$ cat
Re-adding the list:
So with email, you're talking about lots of small reads and writes. In my
experience with DICOM data (thousands of 20KB files per directory), cephfs
doesn't perform very well at all on platter drives. I haven't experimented
with pure ssd configurations, so I can't comment on that.
On Tue, May 9, 2017 at 4:40 PM, Brett Niver wrote:
> What is your workload like? Do you have a single or multiple active
> MDS ranks configured?
User traffic is heavy. I can't really say in terms of MB/s or IOPS, but it's
an email server with 25k+ users, usually about 6k
What is your workload like? Do you have a single or multiple active
MDS ranks configured?
That 1gbps link is the only option I have for those servers, unfortunately.
It's all dedicated server rentals from OVH.
I don't have information regarding the internals of the vrack.
So from what you said, I understand that one should expect a performance
drop in comparison to Ceph RBD using the
If I'm reading your cluster diagram correctly, I'm seeing a 1gbps
interconnect, presumably cat6. Due to the additional latency of performing
metadata operations, I could see cephfs performing at those speeds. Are you
using jumbo frames? Also are you routing?
If you're routing, the router will
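On the jumbo frames point, a quick sketch for testing a 9000-byte MTU
(interface and peer names illustrative; every hop on the path, switches
included, must support the larger MTU):

    ip link show eth0 | grep mtu   # current MTU
    ip link set eth0 mtu 9000      # enable jumbo frames (not persistent)
    ping -M do -s 8972 peer-host   # 9000 minus 28 bytes of IP+ICMP headers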
Hello all,
I've been using cephfs for a while but never really evaluated its
performance.
As I put up a new Ceph cluster, I thought I should run a benchmark to
see if I'm going the right way.
By the results I got, I see that RBD performs *a lot* better in comparison
to cephfs.
The cluster is
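As a sketch of the kind of comparison meant here, the same fio job can be
pointed first at a CephFS mount and then at a filesystem on an RBD image
(paths and parameters illustrative):

    fio --name=randwrite --directory=/mnt/cephfs/bench \
        --rw=randwrite --bs=4k --size=1G --numjobs=4 \
        --ioengine=libaio --direct=1 --group_reporting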
On Thu, Aug 11, 2016 at 1:24 PM, Brett Niver wrote:
> Patrick and I had a related question yesterday, are we able to dynamically
> vary cache size to artificially manipulate cache pressure?
Yes -- at the top of MDCache::trim the max size is read straight out
of g_conf so it
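In practice that means a runtime change along these lines should be picked
up on the next trim (Jewel-era option; mds.a and the value are illustrative,
and mds_cache_size counts inodes, not bytes):

    ceph daemon mds.a config set mds_cache_size 2000000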
On Thu, Aug 11, 2016 at 8:29 AM, Xiaoxi Chen wrote:
> Hi,
>
>
> Here is the slide I shared yesterday on performance meeting.
> Thanks and hoping for inputs.
>
>
> http://www.slideshare.net/XiaoxiChen3/cephfs-jewel-mds-performance-benchmark
These are definitely