Hi,
We use a write-around cache tier with libradosstriper-based clients. We ran
into a bug which causes performance degradation:
http://tracker.ceph.com/issues/22528 . It is especially bad with lots of small
objects - sizeof(1 striper chunk). Such objects get promoted on every
read/write lock :).
And
Dear all,
I have some questions about cache tier in ceph:
1. Can someone share experiences with cache tiering? What are the sensitive
things to pay attention to regarding the cache tier? Can one use the same SSD
for both cache and
2. Is cache tiering supported with bluestore? Any advice for
Hi all,
Has anyone tried setting the cache-tier to forward mode in luminous 12.2.1? Our
cluster cannot write to the rados pool once the mode is set to forward. We set up
the cache-tier with forward mode and then ran rados bench. However, the
throughput from rados bench is 0, and iostat shows no disk
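For reference, a minimal reproduction of the scenario above would look roughly like this (pool names are placeholders; on luminous the switch to forward mode also asks for an extra confirmation flag):
ceph osd tier cache-mode cache-pool forward --yes-i-really-mean-it
rados bench -p rbd 60 write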
Thanks for the answers!
As it leads to a decrease in caching efficiency, I've opened an issue:
http://tracker.ceph.com/issues/22528
15.12.2017, 23:03, "Gregory Farnum" :
> On Thu, Dec 14, 2017 at 9:11 AM, Захаров Алексей
> wrote:
>> Hi, Gregory,
>>
On Thu, Dec 14, 2017 at 9:11 AM, Захаров Алексей wrote:
> Hi, Gregory,
> Thank you for your answer!
>
> Is there a way to not promote on "locking", when not using EC pools?
> Is it possible to make this configurable?
>
> We don't use an EC pool. So, for us this mechanism is
Hi, Gregory,
Thank you for your answer! Is there a way to not promote on "locking" when not
using EC pools? Is it possible to make this configurable? We don't use an EC
pool, so for us this mechanism is overhead. It only adds more load on both
pools and the network.
14.12.2017, 01:16, "Gregory Farnum"
Voluntary “locking” in RADOS is an “object class” operation. These are not
part of the core API and cannot run on EC pools, so any operation using
them will cause an immediate promotion.
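For context, the promotion thresholds that get bypassed in that case are the per-pool recency settings, which on Jewel and later are set along these lines (cache-pool is a placeholder name; they apply to plain reads and writes, not to object-class operations such as locks):
ceph osd pool set cache-pool min_read_recency_for_promote 2
ceph osd pool set cache-pool min_write_recency_for_promote 2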
On Wed, Dec 13, 2017 at 4:02 AM Захаров Алексей
wrote:
> Hello,
>
> I've found that
Hello,
I've found that when a client gets a lock on an object, ceph ignores any promotion
settings and promotes this object immediately.
Is it a bug or a feature?
Is it configurable?
Hope for any help!
Ceph version: 10.2.10 and 12.2.2
We use libradosstriper-based clients.
Cache pool settings:
Hey!
I have a setup with bluestore with 8 HDDs and 1 NVMe of 400GB per host.
How would I get more performance: by using the NVMe as equal RocksDB
partitions (50GB each), by setting it all up as a cache tier OSD for the rest
of the HDDs, or by doing a mix with less DB space, like 15-25GB partitions, and the
rest for
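As an illustration of the first option above (carving the NVMe into per-OSD RocksDB/block.db partitions), OSD creation with ceph-volume would look roughly like this, with the device names being placeholders:
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1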
Hi, thanks for your quick response!
Do I take it from this that your cache tier is only on one node?
If so upgrade the "Risky" up there to "Channeling Murphy".
The two SSDs are on two different nodes, but since we just started
using cache tier, we decided to use a pool size of 2, we know
On Tue, 22 Aug 2017 09:54:34 + Eugen Block wrote:
> Hi list,
>
> we have a productive Hammer cluster for our OpenStack cloud and
> recently a colleague added a cache tier consisting of 2 SSDs and also
> a pool size of 2, we're still experimenting with this topic.
>
Risky, but I guess
Hi list,
we have a productive Hammer cluster for our OpenStack cloud and
recently a colleague added a cache tier consisting of 2 SSDs and also
a pool size of 2, we're still experimenting with this topic.
Now we have some hardware maintenance to do and need to shut down
nodes, one at a
answer, please look below.
> > Subject: Re: [ceph-users] Cache Tier or any other possibility to accelerate
> > RBD with SSD?
@Christian, thanks for the quick answer, please look below.
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: Monday, July 3, 2017 1:39 PM
> To: ceph-users@lists.ceph.com
> Cc: Mateusz Skała <mateusz.sk...@budikom.net>
> Subject: Re: [ceph-
> On 3 July 2017 at 13:01, Mateusz Skała wrote:
>
>
> Hello,
>
> We are using cache-tier in Read-forward mode (replica 3) to accelerate
> reads, and journals on SSD to accelerate writes. We are using only RBD. Based
> on the ceph-docs, RBD has a bad I/O pattern for
Hello,
On Mon, 3 Jul 2017 13:01:06 +0200 Mateusz Skała wrote:
> Hello,
>
> We are using cache-tier in Read-forward mode (replica 3) to accelerate
> reads, and journals on SSD to accelerate writes.
OK, lots of things wrong with this statement, but firstly, Ceph version
(it is relevant) and
Hello,
We are using cache-tier in Read-forward mode (replica 3) to accelerate
reads, and journals on SSD to accelerate writes. We are using only RBD. Based
on the ceph-docs, RBD has a bad I/O pattern for cache tiering. I'm looking for
information about other possibilities to accelerate reads on RBD
Hi community!
I'm wondering what the actual use cases for cache tiering are.
Can I expect a performance improvement in a scenario where I use ceph
with RBD for hosting VMs?
The current pool includes 15 OSDs on 10K SAS drives with an SSD journal for every
5 OSDs.
Thanks
Ruslan
I seriously doubt that it's ever going to be a winning strategy to let
rgw index objects go to a cold tier. Some practical problems:
1) We don't track omap size (the leveldb entries for an object)
because it would turn writes into rmw's -- so they always show up as 0
size. Thus, the
Subject: [ceph-users] cache tier not flushing 10.2.2
Simple issue I can't find with the cache tier. Thanks for taking the time…
Set up a new cluster with an SSD cache tier. My cache tier is on a 1TB SSD. With 2
repli
Simple issue I can't find with the cache tier. Thanks for taking the time…
Set up a new cluster with an SSD cache tier. My cache tier is on a 1TB SSD. With 2
replicas. It just fills up my cache until the ceph filesystem stops allowing
access.
I even set the target_max_bytes to 1048576 (1GB) and still
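Worth noting about the report above: 1048576 bytes is 1 MiB, not 1 GB. An actual 1 GiB cap would presumably be set like this (pool name is a placeholder):
ceph osd pool set cache target_max_bytes 1073741824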
Hello,
On Wed, 20 Jul 2016 11:44:15 +0200 Mateusz Skała wrote:
[snip]
> > > > >
> > > > There are a number of other options to control things, especially with
> > Jewel.
> > > > Also setting your cache mode to readforward might be a good idea
> > > > depending on your use case.
> > > >
> > >
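For reference, switching an existing cache pool to readforward is a single command along these lines (pool name assumed; newer releases additionally require --yes-i-really-mean-it for the forward-family modes):
ceph osd tier cache-mode cache-pool readforward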
Thank you for the quick response.
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: Tuesday, July 19, 2016 3:39 PM
> To: ceph-users@lists.ceph.com
> Cc: Mateusz Skała <mateusz.sk...@budikom.net>
> Subject: Re: [ceph-users] Cache Tier con
Hello,
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: Wednesday, July 13, 2016 4:03 AM
> To: ceph-users@lists.ceph.com
> Cc: Mateusz Skała <mateusz.sk...@budikom.net>
> Subject: Re: [ceph-users] Cache Tier configuration
>
>
>
; > Cc: Mateusz Skała <mateusz.sk...@budikom.net>
> > Subject: Re: [ceph-users] Cache Tier configuration
> >
> >
> > Hello,
> >
> > On Mon, 11 Jul 2016 16:19:58 +0200 Mateusz Skała wrote:
> >
> > > Hello Cephers.
> > >
>
Thank you for the reply. Answers below.
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: Tuesday, July 12, 2016 3:37 AM
> To: ceph-users@lists.ceph.com
> Cc: Mateusz Skała <mateusz.sk...@budikom.net>
> Subject: Re: [ceph-users]
Hello,
On Mon, 11 Jul 2016 16:19:58 +0200 Mateusz Skała wrote:
> Hello Cephers.
>
> Can someone help me with my cache tier configuration? I have 4 identical 176GB SSD
> drives (184196208K) in the SSD pool; how do I determine target_max_bytes?
What exact SSD models are these?
What version of Ceph?
> I
Hello Cephers.
Can someone help me with my cache tier configuration? I have 4 identical 176GB SSD
drives (184196208K) in the SSD pool; how do I determine target_max_bytes? I assume
that it should be (4 drives * 188616916992 bytes) / 3 replicas = 251489222656
bytes * 85% (because of the full disk warning).
It will be
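Completing the arithmetic above: 4 * 188616916992 / 3 = 251489222656 bytes, and 85% of that is roughly 213765839257 bytes, which would then be applied as follows (pool name is a placeholder):
ceph osd pool set cache-pool target_max_bytes 213765839257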
Hi min,
just like Paul already explained:
the cache is made out of OSDs (which, just like any other OSDs, have
their own journal).
So it is up to you what structure you build. You can place all
journals of hot and cold storage (hot = cache, cold = regular storage)
together on the same
thanks Oliver, does the journal need to be committed twice? Once for write
I/O to the cache tier, and once for write I/O destaged to the SATA backend
pool?
2016-04-21 19:38 GMT+08:00 Oliver Dzombic :
> Hi,
>
> afaik cache does not have to do anything with journals.
>
> So
Hi,
AFAIK the cache does not have anything to do with journals.
So your OSDs need journals, and for performance you will use SSDs.
The cache should be something faster than your OSDs, usually SSD or NVMe.
The cache is extra space in front of your OSDs which is supposed to
speed things up
Hi, my ceph cluster has two pools, an SSD cache tier pool and a SATA backend
pool. For this configuration, do I need to use an SSD as the journal device? I do
not know whether the cache tier takes over the journal role. Thanks
> > Subject: Re: [ceph-users] Cache tier operation clarifications
> >
> >
> > Hello,
> >
> > I'd like to get some insights, confirmations from people here who are
> either
> > familiar with the code or have this
Hi Christian,
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Christian Balzer
> Sent: 07 March 2016 02:22
> To: ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] Cache tier operation clarifications
Hello,
I'd like to get some insights, confirmations from people here who are
either familiar with the code or have this tested more empirically than me
(the VM/client node of my test cluster is currently pining for the
fjords).
When it comes to flushing/evicting we already established that
On Sat, 5 Mar 2016 06:08:49 +0100 Francois Lafont wrote:
> Hello,
>
> On 04/03/2016 09:17, Christian Balzer wrote:
>
> > Unlike the subject may suggest, I'm mostly going to try and explain how
> > things work with cache tiers, as far as I understand them.
> > Something of a reference to point
Hi Christian,
great work !
Before, I didn't even know that the possibility of building
separate caching pools exists.
Thank you very much for your community contribution !
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Anschrift:
Hello,
On 04/03/2016 09:17, Christian Balzer wrote:
> Unlike the subject may suggest, I'm mostly going to try and explain how
> things work with cache tiers, as far as I understand them.
> Something of a reference to point to. [...]
I'm currently unqualified concerning cache tiering but I'm
Great feedback (at least for me).
I would like to know if the behaviours you are seeing are expected or not.
BTW I will do some tests regarding cache tiering with my new toy.
Cheers,
S
On Fri, Mar 4, 2016 at 5:17 PM, Christian Balzer wrote:
>
> Hello,
>
> Unlike the subject
Hello,
Unlike what the subject may suggest, I'm mostly going to try and explain how
things work with cache tiers, as far as I understand them.
Something of a reference to point to.
Of course if you spot something that's wrong or have additional
information, by all means please do comment.
While the
Interesting... see below
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Christian Balzer
> Sent: 01 March 2016 08:20
> To: ceph-users@lists.ceph.com
> Cc: Nick Fisk <n...@fisk.me.uk>
> Subject: Re: [ceph-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> > > Of Christian Balzer
> > > Sent: 26 February 2016 09:07
> > > To: ceph-users@lists.ceph.com
> > > Subject: [ceph-users] Cache tier weirdness
> > >
> > >
> > > Hello,
>
eph-users@lists.ceph.com
> > Subject: [ceph-users] Cache tier weirdness
> >
> >
> > Hello,
> >
> > still my test cluster with 0.94.6.
> > It's a bit fuzzy, but I don't think I saw this with Firefly, but then
> again that is
> > totally broken when it
Hi Christian,
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Christian Balzer
> Sent: 26 February 2016 09:07
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Cache tier weirdness
>
>
> Hello,
Hello,
still my test cluster with 0.94.6.
It's a bit fuzzy, but I don't think I saw this with Firefly, but then
again that is totally broken when it comes to cache tiers (switching
between writeback and forward mode).
goat is a cache pool for rbd:
---
# ceph osd pool ls detail
pool 2 'rbd'
Hi Brian,
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Brian Kroth
> Sent: 23 October 2015 21:31
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] cache tier write-back upper bound?
>
> Hi, I'm wondering
Hi, I'm wondering, when using a cache tier pool, whether there's an upper bound
on when something written to the cache is flushed back to the backing
pool - something like a cache_max_flush_age setting? Basically I'm
wondering if I have the unfortunate case where all of the SSD replicas for
a cache
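For reference, the related knobs that do exist are lower bounds and ratios rather than an upper age bound - a rough sketch, with the pool name assumed:
ceph osd pool set ssd-cache cache_min_flush_age 600
ceph osd pool set ssd-cache cache_target_dirty_ratio 0.4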
Hello,
On Wed, 07 Oct 2015 07:34:16 +0200 Loic Dachary wrote:
> Hi Christian,
>
> Interesting use case :-) How many OSDs / hosts do you have ? And how are
> they connected together ?
>
If you look far back in the archives you'd find that design.
And of course there will be a lot of "I told
Hi Christian,
On 07.10.2015 09:04, Christian Balzer wrote:
>
> ...
>
> My main suspect for the excessive slowness are actually the Toshiba DT
> type drives used.
> We only found out after deployment that these can go into a zombie mode
> (20% of their usual performance for ~8 hours if not
Hello Udo,
On Wed, 07 Oct 2015 11:40:11 +0200 Udo Lembke wrote:
> Hi Christian,
>
> On 07.10.2015 09:04, Christian Balzer wrote:
> >
> > ...
> >
> > My main suspect for the excessive slowness are actually the Toshiba DT
> > type drives used.
> > We only found out after deployment that these
Hi Christian,
Interesting use case :-) How many OSDs / hosts do you have ? And how are they
connected together ?
Cheers
On 07/10/2015 04:58, Christian Balzer wrote:
>
> Hello,
>
> a bit of back story first, it may prove educational for others and future
> generations.
>
> As some may recall,
Hello,
a bit of back story first, it may prove educational for others and future
generations.
As some may recall, I have a firefly production cluster with a storage node
design that was both optimized for the use case at the time and with an
estimated capacity to support 140 VMs (all running the
Hi Everyone,
Getting close to cracking my understanding of cache tiering, and ec pools.
Stuck on one anomaly which I do not understand; I have spent hours reviewing docs
online and can't seem to pinpoint what I'm doing wrong. Referencing
http://ceph.com/docs/master/rados/operations/cache-tiering/
Subject: [ceph-users] Cache tier full not evicting
Hi Everyone,
Getting close to cracking my understanding of cache tiering, and ec pools.
Stuck on one anomaly which I do not understand; I have spent hours reviewing docs
online and can't seem to pinpoint what I'm doing wrong. Referencing
http
Hi everyone,
I have been using a cache-tier on a data pool.
After a long time, a lot of rbd images are no longer displayed by rbd -p
data ls,
although the images still show up via rbd info and the rados ls command.
rbd -p data info volume-008ae4f7-3464-40c0-80b0-51140d8b95a8
rbd image
: [ceph-users] Cache tier best practices
Hi,
I would like to hear from people who use cache tier in Ceph about best
practices and things I should avoid.
I remember hearing that it wasn't that stable back then. Has it
changed in
Hammer release?
It's not so much the stability
Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Dominik Zalewski
Sent: 12 August 2015 14:40
To: ceph-us...@ceph.com
Subject: [ceph-users] Cache tier best practices
Hi,
I would like to hear from people who use cache tier in Ceph about best
Use the order parameter when creating an RBD: 22 = 4MB, 20 = 1MB.
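As an illustration of the advice above, creating a 10 GB image with 1 MB objects would look something like this (pool and image names are placeholders; newer releases use --object-size instead of --order):
rbd create --size 10240 --order 20 rbd/test-image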
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Vickey
Singh
Sent: 13 August 2015 09:31
To: Nick Fisk n...@fisk.me.uk
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] Cache tier best practices
Thanks
: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
Of
Dominik Zalewski
Sent: 12 August 2015 14:40
To: ceph-us...@ceph.com
Subject: [ceph-users] Cache tier best practices
Hi,
I would like to hear from people who use cache tier in Ceph about best
practices and things
Hi,
I would like to hear from people who use cache tier in Ceph about best
practices and things I should avoid.
I remember hearing that it wasn't that stable back then. Has it changed in
Hammer release?
Any tips and tricks are much appreciated!
Thanks
Dominik
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Dominik Zalewski
Sent: 12 August 2015 14:40
To: ceph-us...@ceph.com
Subject: [ceph-users] Cache tier best practices
Hi,
I would like to hear from people who use cache tier in Ceph about
Hello all,
We are trying to run some tests on a cache-tier Ceph cluster, but
we are encountering serious problems, which eventually render the cluster
unusable.
We are apparently doing something wrong, but we have no idea
what it could be. We'd really appreciate it if someone could point out what
Hi Xavier
see comments inline
JC
On 16 Apr 2015, at 23:02, Xavier Serrano xserrano+c...@ac.upc.edu wrote:
Hello all,
We are trying to run some tests on a cache-tier Ceph cluster, but
we are encountering serious problems, which eventually render the cluster
unusable. We are apparently
We are apparently
Hi,
ceph version 0.87.1
thanks
best regards
-Original message-
From: Chu Duc Minh chu.ducm...@gmail.com
Sent: Thursday 9th April 2015 15:03
To: Patrik Plank pat...@plank.me
Cc: ceph-users@lists.ceph.com ceph-users@lists.ceph.com
ceph-users@lists.ceph.com
Subject: Re: [ceph-users
Hi,
I have built a cache-tier pool (replica 2) with 3 x 512GB SSDs for my kvm pool.
These are my settings:
ceph osd tier add kvm cache-pool
ceph osd tier cache-mode cache-pool writeback
ceph osd tier set-overlay kvm cache-pool
ceph osd pool set cache-pool hit_set_type bloom
ceph osd
Hi,
set the cache-tier size to 644245094400.
This should work.
But it is the same.
thanks
regards
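For reference, that size would presumably have been applied with something like the following, cache-pool being the pool name from the settings quoted in this thread:
ceph osd pool set cache-pool target_max_bytes 644245094400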
-Original message-
From: Gregory Farnum g...@gregs42.com
Sent: Thursday 9th April 2015 15:44
To: Patrik Plank pat...@plank.me
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users
On Thu, Apr 9, 2015 at 4:56 AM, Patrik Plank pat...@plank.me wrote:
Hi,
I have built a cache-tier pool (replica 2) with 3 x 512GB SSDs for my kvm
pool.
these are my settings :
ceph osd tier add kvm cache-pool
ceph osd tier cache-mode cache-pool writeback
ceph osd tier set-overlay kvm
What ceph version do you use?
Regards,
On 9 Apr 2015 18:58, Patrik Plank pat...@plank.me wrote:
Hi,
I have built a cache-tier pool (replica 2) with 3 x 512GB SSDs for my kvm
pool.
these are my settings :
ceph osd tier add kvm cache-pool
ceph osd tier cache-mode cache-pool writeback
Hello,
On Wed, 18 Mar 2015 11:05:47 -0700 Gregory Farnum wrote:
On Wed, Mar 18, 2015 at 8:04 AM, Nick Fisk n...@fisk.me.uk wrote:
Hi Greg,
Thanks for your input and completely agree that we cannot expect
developers to fully document what impact each setting has on a
cluster,
I think this could be part of what I am seeing. I found this post from back in
2013:
http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/12083
which seems to describe a workaround for behaviour similar to what I am seeing.
The constant small block IO I was seeing looks like it was either
On Wed, Mar 18, 2015 at 11:10 PM, Christian Balzer ch...@gol.com wrote:
Hello,
On Wed, 18 Mar 2015 11:05:47 -0700 Gregory Farnum wrote:
On Wed, Mar 18, 2015 at 8:04 AM, Nick Fisk n...@fisk.me.uk wrote:
Hi Greg,
Thanks for your input and completely agree that we cannot expect
:57
To: Christian Balzer
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Cache Tier Flush = immediate base tier journal
sync?
On Mon, Mar 16, 2015 at 4:46 PM, Christian Balzer ch...@gol.com wrote:
On Mon, 16 Mar 2015 16:09:12 -0700 Gregory Farnum wrote:
Nothing here particularly
On Wed, Mar 18, 2015 at 8:04 AM, Nick Fisk n...@fisk.me.uk wrote:
Hi Greg,
Thanks for your input and completely agree that we cannot expect developers
to fully document what impact each setting has on a cluster, particularly in
a performance related way
That said, if you or others could
On Wed, Mar 11, 2015 at 2:25 PM, Nick Fisk n...@fisk.me.uk wrote:
I’m not sure if it’s something I’m doing wrong or just experiencing an
oddity, but when my cache tier flushes dirty blocks out to the base tier, the
writes seem to hit the OSD’s straight away instead of coalescing in the
-users-boun...@lists.ceph.com] On Behalf
Of Gregory Farnum
Sent: 16 March 2015 17:33
To: Nick Fisk
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Cache Tier Flush = immediate base tier
journal sync?
On Wed, Mar 11, 2015 at 2:25 PM, Nick Fisk n...@fisk.me.uk wrote:
I’m
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Gregory Farnum
Sent: 16 March 2015 17:33
To: Nick Fisk
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Cache Tier Flush = immediate base tier journal
sync?
On Wed, Mar 11
: 16 March 2015 17:33
To: Nick Fisk
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Cache Tier Flush = immediate base tier journal
sync?
On Wed, Mar 11, 2015 at 2:25 PM, Nick Fisk n...@fisk.me.uk wrote:
I’m not sure if it’s something I’m doing wrong or just experiencing an
oddity
I'm not sure if it's something I'm doing wrong or if I'm just experiencing an
oddity, but when my cache tier flushes dirty blocks out to the base tier,
the writes seem to hit the OSDs straight away instead of coalescing in the
journals. Is this correct?
For example, if I create an RBD on a standard 3 way
Hello!
If I am using a cache tier pool in writeback mode, is it a good idea to turn
off the journal on the OSDs?
I think in this situation the journal can help if you hit a rebalance
procedure on the cold storage. In other situations the journal is
useless, I think.
Any comments?
-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Jean-Charles Lopez
Sent: 09 November 2014 01:43
To: Nick Fisk
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Cache Tier Statistics
Hi Nick
If my brain doesn't fail me you can try
ceph daemon osd.{id} perf dump
ceph report
Hi,
Does anyone know if there are any statistics available specific to the cache
tier functionality? I'm thinking along the lines of cache hit ratios. Or
should I be pulling out the read statistics for the backing+cache pools and
assuming that if a read happens from the backing pool it was a miss and
Hi Nick
If my brain doesn't fail me you can try
ceph daemon osd.{id} perf dump
ceph report (not 100% sure if cache stats are in
Rgds
JC
On Saturday, November 8, 2014, Nick Fisk n...@fisk.me.uk wrote:
Hi,
Does anyone know if there any statistics available specific to the cache
tier
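As a rough illustration of the suggestion above, the tiering counters can be pulled out of the OSD admin socket like this (osd.0 is a placeholder; counters such as tier_promote, tier_flush and tier_evict show up in the perf dump output):
ceph daemon osd.0 perf dump | grep tier_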
Thanks JC, it worked; now the cache tiering agent is migrating data between
tiers.
But now I am seeing a new ISSUE: the cache-pool has got some EXTRA objects
that are not visible with # rados -p cache-pool ls, but under # ceph df I can see
the count of those objects.
[root@ceph-node1 ~]# ceph
Hi Karan,
maybe it is the statistical information about the temperature of the objects in the
pool that has to be maintained by the flushing/evicting agent?
Otherwise, hard to say from my seat.
JC
On Sep 14, 2014, at 08:05, Karan Singh karan.si...@csc.fi wrote:
Thanks JC , it worked , now
Hello Cephers
I have created a cache pool and it looks like the cache tiering agent is not able to
flush/evict data as per the defined policy. However, when I manually evict/flush
data, it migrates data from the cache-tier to the storage-tier.
Kindly advise if there is something wrong with the policy or anything
Hi Karan,
Maybe try setting the dirty byte ratio (flush) and the full ratio (eviction), just
to see if it makes any difference:
- cache_target_dirty_ratio .1
- cache_target_full_ratio .2
Tune the percentages as desired relative to target_max_bytes and
target_max_objects. The first threshold
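Applied to a pool, the two thresholds suggested above would look like this (cache-pool is a placeholder name):
ceph osd pool set cache-pool cache_target_dirty_ratio 0.1
ceph osd pool set cache-pool cache_target_full_ratio 0.2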
Another problem that I faced is that when I start a benchmark that generates a
heavy workload, the cache-pool does not evict/flush objects while the
workload is being generated. Even though I set target_max_objects to a certain
amount, it does not flush/evict the objects during workload generation! So
Hi Greg,
Thanks for your prompt reply. I would appreciate it if you could also help me with
the following issues:
1) After mounting a directory to a pool called cold-pool, I started to save
data through CephFS. After removing all of the created files from CephFS, I could
not remove the objects from the
1) it will take time for the deleted objects to flush out of the cache pool
and then be deleted in the cold pool. They will disappear eventually,
though!
2) you can't delete pools which are in the MDSMap.
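If waiting for the agent is not an option, the cache pool can also be drained manually - a sketch on recent enough releases, using the hot-storage pool name from this thread:
rados -p hot-storage cache-flush-evict-all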
On Thursday, June 19, 2014, Sherry Shahbazi shoo...@yahoo.com wrote:
Hi Greg,
Thanks for
On Wed, Jun 18, 2014 at 12:54 AM, Sherry Shahbazi shoo...@yahoo.com wrote:
Hi everyone,
If I have a pool called cold-storage (1) and a pool called hot-storage (2),
where hot-storage is a cache tier for the cold-storage,
I normally do the following in order to map a directory on my client to a
Hi everyone,
If I have a pool called cold-storage (1) and a pool called hot-storage (2), where
hot-storage is a cache tier for the cold-storage,
I normally do the following in order to map a directory on my client to a pool:
on a Ceph monitor,
ceph mds add_data_pool 1
ceph mds add_data_pool 2
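The client-side step that usually follows (a sketch, assuming a client recent enough to support layout xattrs; the mount path is a placeholder) is to point the directory layout at the base pool, which the cache tier then overlays:
setfattr -n ceph.dir.layout.pool -v cold-storage /mnt/cephfs/mydir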