It looks like you're using a kernel RBD mapping in the second case? I
imagine your kernel doesn't support cache pools and you'd need to upgrade
for it to work.
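One quick way to check: when the kernel client is missing the cache-tier
feature bits, libceph logs a feature set mismatch. A sketch (run on the
iSCSI gateway host; the exact feature bits vary):

  uname -r                                # kernel version of the krbd client
  dmesg | grep -i 'feature set mismatch'  # logged when the kernel lacks bits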
-Greg

On Tuesday, July 1, 2014, Никитенко Виталий <v1...@yandex.ru> wrote:

> Good day!
> I have a server with Ubuntu 14.04 and Ceph Firefly installed. I configured
> main_pool (2 OSDs) and ssd_pool (1 SSD OSD), and I want to use ssd_pool as
> a cache pool for main_pool:
>
>   ceph osd tier add main_pool ssd_pool
>   ceph osd tier cache-mode ssd_pool writeback
>   ceph osd tier set-overlay main_pool ssd_pool
>
>   ceph osd pool set ssd_pool hit_set_type bloom
>   ceph osd pool set ssd_pool hit_set_count 1
>   ceph osd pool set ssd_pool hit_set_period 600
>   ceph osd pool set ssd_pool target_max_bytes 100000000000
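>
>  (A sketch to verify the tiering is wired up: "ceph osd dump" prints each
>  pool with its cache_mode and tier settings, so ssd_pool should show up as
>  a writeback tier of main_pool:)
>
>   ceph osd dump | grep -E 'main_pool|ssd_pool'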
>
>  If I use tgt like this:
>  tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --bstype
> rbd --backing-store main_pool/store_main --bsopts "conf=/etc/ceph/ceph.conf"
>  and then connect an iSCSI initiator to this LUN, I see that ssd_pool is
> used as the cache (visible in iostat -x 1), but the speed is slow.
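>
>  (For completeness, the target must exist before the LUN is added; a
>  minimal sketch with a hypothetical IQN, assuming tgt was built with the
>  rbd backing store ("tgtadm --lld iscsi --mode system --op show" should
>  list rbd under "Backing stores"):)
>
>   tgtadm --lld iscsi --mode target --op new --tid 1 -T iqn.2014-07.example:main
>   tgtadm --lld iscsi --mode target --op bind --tid 1 -I ALL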
>
>  If I use tgt like this (or another target such as SCST or iscsitarget):
>  tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 -b
> /dev/rbd1 (where rbd1=main_pool/store_main)
>  and then connect an iSCSI initiator to this LUN, I see that ssd_pool is
> not used; writes go straight to the two OSDs.
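>
>  (Here /dev/rbd1 presumably comes from mapping the image with the kernel
>  client, something like:)
>
>   sudo rbd map main_pool/store_main   # creates /dev/rbdN via the krbd module
>   rbd showmapped                      # confirm rbd1 -> main_pool/store_main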
>
>  Can anyone help? Has anyone gotten iSCSI working with a cache pool?
>


-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
