Thanks JC, it worked; the cache tiering agent is now migrating data between
tiers.

But now I am seeing a new issue: the cache-pool has acquired some extra objects
that are not visible with # rados -p cache-pool ls, yet # ceph df does show them
in the object count.
[root@ceph-node1 ~]# ceph df | egrep -i "objects|pool"
POOLS:
NAME ID USED %USED OBJECTS
EC-pool 15 1000M 1.21 2
cache-pool 16 252 0 3
[root@ceph-node1 ~]#
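(As a cross-check, I believe rados df should report the same per-pool object
counts as ceph df:)

[root@ceph-node1 ~]# rados df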
[root@ceph-node1 ~]# rados -p cache-pool ls
[root@ceph-node1 ~]# rados -p cache-pool cache-flush-evict-all
[root@ceph-node1 ~]# rados -p cache-pool ls
[root@ceph-node1 ~]# ceph df | egrep -i "objects|pool"
POOLS:
NAME ID USED %USED OBJECTS
EC-pool 15 1000M 1.21 2
cache-pool 16 252 0 3
[root@ceph-node1 ~]#
Also, when I create ONE object manually, # ceph df says that 2 objects have
been added. Where is this extra object coming from?
[root@ceph-node1 ~]# ceph df | egrep -i "objects|pool"
POOLS:
NAME ID USED %USED OBJECTS
EC-pool 15 1000M 1.21 2
cache-pool 16 252 0 3
[root@ceph-node1 ~]#
[root@ceph-node1 ~]#
[root@ceph-node1 ~]# rados -p cache-pool put test /etc/hosts
(I added one object in this step)
[root@ceph-node1 ~]# rados -p cache-pool ls
test
(when I list, I can see only the 1 object I just created)
[root@ceph-node1 ~]# ceph df | egrep -i "objects|pool"
POOLS:
NAME ID USED %USED OBJECTS
EC-pool 15 1000M 1.21 2
cache-pool 16 651 0 5
(Why is it showing 5 objects when it showed 3 earlier? Why did the count
increase by 2 when I added only 1 object?)
[root@ceph-node1 ~]#
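Could these hidden objects be the hit_set archives that the tiering agent keeps
in the cache pool? They would be counted by ceph df, but if they live in an
internal RADOS namespace a plain rados ls would not list them. If this rados
build supports the --namespace option, I guess something like the following
could confirm it (the ".ceph-internal" namespace name is my assumption):

[root@ceph-node1 ~]# rados -p cache-pool --namespace=".ceph-internal" ls
[root@ceph-node1 ~]# ceph osd pool get cache-pool hit_set_count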
- Karan -
On 14 Sep 2014, at 03:42, Jean-Charles LOPEZ <[email protected]> wrote:
> Hi Karan,
>
> Maybe try setting the dirty byte ratio (flush) and the full ratio (eviction).
> Just try it and see if it makes any difference:
> - cache_target_dirty_ratio .1
> - cache_target_full_ratio .2
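> 
> For example (with the cache-pool name and values from this thread):
> 
> ceph osd pool set cache-pool cache_target_dirty_ratio .1
> ceph osd pool set cache-pool cache_target_full_ratio .2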
>
> Tune the percentages as desired relative to target_max_bytes and
> target_max_objects. Whichever threshold is reached first (number of objects
> or number of bytes) will trigger the flush or eviction.
>
> JC
>
>
>
> On Sep 13, 2014, at 15:23, Karan Singh <[email protected]> wrote:
>
>> Hello Cephers
>>
>> I have created a cache pool, and it looks like the cache tiering agent is not
>> able to flush/evict data per the defined policy. However, when I manually
>> flush/evict, the data does migrate from the cache tier to the storage tier.
>> 
>> Kindly advise if there is something wrong with the policy, or anything else I
>> am missing.
>>
>> Ceph Version: 0.80.5
>> OS: CentOS 6.4
>>
>> The cache pool was created using the following commands:
>>
>> ceph osd tier add data cache-pool
>> ceph osd tier cache-mode cache-pool writeback
>> ceph osd tier set-overlay data cache-pool
>> ceph osd pool set cache-pool hit_set_type bloom
>> ceph osd pool set cache-pool hit_set_count 1
>> ceph osd pool set cache-pool hit_set_period 300
>> ceph osd pool set cache-pool target_max_bytes 10000
>> ceph osd pool set cache-pool target_max_objects 100
>> ceph osd pool set cache-pool cache_min_flush_age 60
>> ceph osd pool set cache-pool cache_min_evict_age 60
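>> 
>> To read back what the pool actually got (assuming ceph osd pool get accepts
>> these cache-tier keys on 0.80.5):
>> 
>> ceph osd pool get cache-pool hit_set_period
>> ceph osd pool get cache-pool target_max_bytes
>> ceph osd pool get cache-pool cache_min_flush_age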
>>
>>
>> [root@ceph-node1 ~]# date
>> Sun Sep 14 00:49:59 EEST 2014
>> [root@ceph-node1 ~]# rados -p data put file1 /etc/hosts
>> [root@ceph-node1 ~]# rados -p data ls
>> [root@ceph-node1 ~]# rados -p cache-pool ls
>> file1
>> [root@ceph-node1 ~]#
>>
>>
>> [root@ceph-node1 ~]# date
>> Sun Sep 14 00:59:33 EEST 2014
>> [root@ceph-node1 ~]# rados -p data ls
>> [root@ceph-node1 ~]#
>> [root@ceph-node1 ~]# rados -p cache-pool ls
>> file1
>> [root@ceph-node1 ~]#
>>
>>
>> [root@ceph-node1 ~]# date
>> Sun Sep 14 01:08:02 EEST 2014
>> [root@ceph-node1 ~]# rados -p data ls
>> [root@ceph-node1 ~]# rados -p cache-pool ls
>> file1
>> [root@ceph-node1 ~]#
>>
>>
>>
>> [root@ceph-node1 ~]# rados -p cache-pool cache-flush-evict-all
>> file1
>> [root@ceph-node1 ~]#
>> [root@ceph-node1 ~]# rados -p data ls
>> file1
>> [root@ceph-node1 ~]# rados -p cache-pool ls
>> [root@ceph-node1 ~]#
>>
>>
>> Regards
>> Karan Singh
>>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com