Ilya, hi. Do you happen to have the required kernel patches?

2014-03-25 14:51 GMT+04:00 Ирек Фасихов <[email protected]>:

> Yep, that works.
>
>
> 2014-03-25 14:45 GMT+04:00 Ilya Dryomov <[email protected]>:
>
> On Tue, Mar 25, 2014 at 12:00 PM, Ирек Фасихов <[email protected]> wrote:
>> > Hmm, I created another image in another pool, one without a cache tier.
>> >
>> > [root@ceph01 cluster]# rbd create test/image --size 102400
>> > [root@ceph01 cluster]# rbd -p test ls -l
>> > NAME     SIZE PARENT FMT PROT LOCK
>> > image 102400M          1
>> > [root@ceph01 cluster]# ceph osd dump | grep test
>> > pool 4 'test' replicated size 3 min_size 2 crush_ruleset 0 object_hash
>> > rjenkins pg_num 100 pgp_num 100 last_change 2049 owner 0 flags
>> > hashpspool stripe_width 0
>> >
>> > I get the same error...
>> >
>> > [root@ceph01 cluster]# rbd map -p test image
>> > rbd: add failed: (5) Input/output error
>> >
>> > Mar 25 13:53:56 ceph01 kernel: libceph: client11343 fsid
>> > 10b46114-ac17-404e-99e3-69b34b85c901
>> > Mar 25 13:53:56 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
>> > Mar 25 13:53:56 ceph01 kernel: libceph: osdc handle_map corrupt msg
>> > Mar 25 13:53:56 ceph01 kernel: libceph: mon0 192.168.100.201:6789 session
>> > established
>> >
>> > Maybe I'm doing something wrong?
>>
>> No, the problem here is that the pool with the hit_set parameters set
>> still exists and is therefore still present in the osdmap.  You'll have
>> to remove that pool with something like
>>
>> # I assume "cache" is the name of the cache pool
>> ceph osd tier remove-overlay rbd
>> ceph osd tier remove rbd cache
>> ceph osd pool delete cache cache --yes-i-really-really-mean-it
>>
>> in order to be able to map test/image.
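>>
>> As a quick sanity check afterwards (assuming the cache pool was the only
>> pool carrying hit_set parameters, and that your "ceph osd dump" output
>> only prints hit_set fields for pools that have them set), something like
>>
>> ceph osd dump | grep hit_set   # should print nothing once the pool is gone
>> rbd map -p test image          # should now succeed
>>
>> confirms the osdmap no longer carries the fields the kernel client
>> cannot decode.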
>>
>> Thanks,
>>
>>                 Ilya
>>
>
>
>
> --
> Best regards, Фасихов Ирек Нургаязович
> Mobile: +79229045757
>



-- 
Best regards, Фасихов Ирек Нургаязович
Mobile: +79229045757
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com