It seems that temperature / recency estimation hasn't worked properly at some
point.
Cheers,
Shinobu
----- Original Message -----
From: "Christian Balzer"
To: ceph-users@lists.ceph.com
Sent: Thursday, April 7, 2016 11:51:38 AM
Subject: [ceph-users] Performance counters
Hello,
On Wed, 6 Apr 2016 20:35:20 +0200 Oliver Dzombic wrote:
> Hi,
>
> I have some IO issues, and after Christian's great article/hint about
> caches I plan to add caches too.
>
Thanks, version 2 is still a work in progress, as I keep running into
unknowns.
IO issues in what sense, like
Hello,
Ceph 0.94.5 for the record.
As some may remember, I phased in a 2TB cache tier 5 weeks ago.
By now it has reached about 60% usage, which is what I have
cache_target_dirty_ratio set to.
And for the last 3 days I could see some writes (op_in_bytes) to the
backing storage (aka HDD
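For reference, the numbers mentioned above can be read back roughly like this
(a sketch only; "cache-pool" and osd.0 are placeholders, not names from this thread):

ceph osd pool get cache-pool cache_target_dirty_ratio   # the ~0.6 dirty ratio
ceph osd pool get cache-pool target_max_bytes           # cache capacity, if set via bytes
ceph daemon osd.0 perf dump | grep '"op_in_bytes"'      # per-OSD write counter referenced above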
Hello,
On Wed, 6 Apr 2016 18:15:57 + David Turner wrote:
> You can mitigate how much it affects the IO, but at the cost of how long
> it will take to complete.
>
> ceph tell osd.* injectargs '--osd-max-backfills #'
>
Also have a read of:
On Wed, Apr 6, 2016 at 10:42 PM, Scottix wrote:
> I have been running some speed tests on POSIX file operations and I noticed
> that even just listing files can take a while compared to an attached HDD. I am
> wondering if there is a reason it takes so long to even just list files.
If
On Wed, Apr 6, 2016 at 2:42 PM, Scottix wrote:
> I have been running some speed tests on POSIX file operations and I noticed
> that even just listing files can take a while compared to an attached HDD. I am
> wondering if there is a reason it takes so long to even just list files.
>
>
I have been running some speed tests on POSIX file operations and I noticed
that even just listing files can take a while compared to an attached HDD. I am
wondering if there is a reason it takes so long to even just list files.
Here is the test I ran:
time for i in {1..10}; do touch $i; done
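A rough sketch of a complementary listing test (mount point and file count are
made up for illustration, not taken from the thread):

mkdir -p /mnt/cephfs/listtest && cd /mnt/cephfs/listtest   # hypothetical CephFS mount
time for i in {1..1000}; do touch $i; done                 # create the test files
time ls -f > /dev/null                                     # plain readdir, no per-file stat
time ls -l > /dev/null                                     # listing that stat()s every entry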
Hi,
I have some IO issues, and after Christian's great article/hint about
caches I plan to add caches too.
So now comes the troublesome question:
How dangerous is it to add cache tiers to an existing cluster with
around 30 OSDs and 40 TB of data on 3-6 (currently being reduced) nodes?
I
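For what it's worth, attaching a cache tier to an existing pool generally comes
down to the following (a sketch with placeholder pool names; not a statement on
how safe it is in this particular cluster):

ceph osd tier add rbd-pool cache-pool              # attach cache-pool in front of rbd-pool
ceph osd tier cache-mode cache-pool writeback      # set the caching mode
ceph osd tier set-overlay rbd-pool cache-pool      # redirect client traffic through the cache
ceph osd pool set cache-pool hit_set_type bloom    # needed for hit/recency tracking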
You can mitigate how much it affects the IO, but at the cost of how long it
will take to complete.
ceph tell osd.* injectargs '--osd-max-backfills #'
Where # is the maximum number of PGs any OSD can backfill data for at any given
time. This is the same setting that is used when you add,
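A minimal sketch of how that is typically used (the values are only examples):

ceph tell osd.* injectargs '--osd-max-backfills 1'    # throttle: at most 1 concurrent backfill per OSD
# ... let the data movement finish ...
ceph tell osd.* injectargs '--osd-max-backfills 10'   # restore a higher value afterwards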
Hey cephers,
I have all but one of the presentations from Ceph Day Sunnyvale, so
rather than wait for a full hand I went ahead and posted the link to
the slides on the event page:
http://ceph.com/cephdays/ceph-day-sunnyvale/
The videos probably won't be processed until after next week, but I'll
Will changing the replication size from 2 to 3 consume a lot of I/O resources,
or does this happen quietly in the background?
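For context, the change itself is a single pool setting (the pool name below is
a placeholder), and the resulting data movement can be throttled with
osd-max-backfills as discussed above:

ceph osd pool set rbd size 3        # raise the replica count; backfills the third copy
ceph osd pool set rbd min_size 2    # commonly adjusted along with it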
On 2016-04-06 00:40, Christian Balzer wrote:
Hello,
Brian already mentioned a number of very pertinent things; I've got a few
more:
On Tue, 05 Apr 2016 10:48:49
If you can guarantee that your write will be wholly contained within an object
(and within a stripe), you should be able to consider the writes to be atomic
between two clients since the OSD will process the two writes in sequence (all
ops are executed in order for a given placement group).
--
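A small sketch (not from the thread) of the "wholly contained within an object"
condition, assuming the default 4 MiB RBD object size:

object_size=$((4 * 1024 * 1024))
offset=$((8 * 1024 * 1024 - 2048))    # example write starting 2 KiB before an object boundary
length=4096
if [ $(( offset / object_size )) -eq $(( (offset + length - 1) / object_size )) ]; then
    echo "contained in one object -> the OSD applies competing writes in sequence"
else
    echo "spans an object boundary -> no cross-object atomicity guarantee"
fi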
Thanks Jason. Yes, I also do not think they can guarantee atomicity at the
extent level. But for a stripe unit within an object, can atomic writes be
guaranteed? Thanks.
2016-04-06 19:53 GMT+08:00 Jason Dillaman :
> It's possible for a write to span one or more blocks -- it just