FYI, I encountered the same problem with krbd; removing the EC pool didn't
solve my problem.
I'm running kernel 3.13.

–––– 
Sébastien Han 
Cloud Engineer 

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72 
Mail: [email protected] 
Address: 11 bis, rue Roquépine - 75008 Paris
Web: www.enovance.com - Twitter: @enovance 

On 08 Jun 2014, at 10:19, Ilya Dryomov <[email protected]> wrote:

> On Sun, Jun 8, 2014 at 11:27 AM, Igor Krstic <[email protected]> wrote:
>> On Fri, 2014-06-06 at 17:40 +0400, Ilya Dryomov wrote:
>>> On Fri, Jun 6, 2014 at 4:34 PM, Kenneth Waegeman
>>> <[email protected]> wrote:
>>>> 
>>>> ----- Message from Igor Krstic <[email protected]> ---------
>>>>   Date: Fri, 06 Jun 2014 13:23:19 +0200
>>>>   From: Igor Krstic <[email protected]>
>>>> Subject: Re: [ceph-users] question about feature set mismatch
>>>>     To: Ilya Dryomov <[email protected]>
>>>>     Cc: [email protected]
>>>> 
>>>> 
>>>> 
>>>>> On Fri, 2014-06-06 at 11:51 +0400, Ilya Dryomov wrote:
>>>>>> 
>>>>>> On Thu, Jun 5, 2014 at 10:38 PM, Igor Krstic <[email protected]>
>>>>>> wrote:
>>>>>>> Hello,
>>>>>>> 
>>>>>>> dmesg:
>>>>>>> [  690.181780] libceph: mon1 192.168.214.102:6789 feature set mismatch, my 4a042a42 < server's 504a042a42, missing 5000000000
>>>>>>> [  690.181907] libceph: mon1 192.168.214.102:6789 socket error on read
>>>>>>> [  700.190342] libceph: mon0 192.168.214.101:6789 feature set mismatch, my 4a042a42 < server's 504a042a42, missing 5000000000
>>>>>>> [  700.190481] libceph: mon0 192.168.214.101:6789 socket error on read
>>>>>>> [  710.194499] libceph: mon1 192.168.214.102:6789 feature set mismatch, my 4a042a42 < server's 504a042a42, missing 5000000000
>>>>>>> [  710.194633] libceph: mon1 192.168.214.102:6789 socket error on read
>>>>>>> [  720.201226] libceph: mon1 192.168.214.102:6789 feature set mismatch, my 4a042a42 < server's 504a042a42, missing 5000000000
>>>>>>> [  720.201482] libceph: mon1 192.168.214.102:6789 socket error on read
>>>>>>> 
>>>>>>> 5000000000 should be:
>>>>>>> CEPH_FEATURE_CRUSH_V2           bit 36  1000000000
>>>>>>> and
>>>>>>> CEPH_FEATURE_OSD_ERASURE_CODES  bit 38  4000000000
>>>>>>> CEPH_FEATURE_OSD_TMAP2OMAP      bit 38  4000000000  (shared bit)
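>>>>>>>
>>>>>>> A quick way to double-check those masks with plain shell arithmetic
>>>>>>> (nothing cluster-specific, just the values from dmesg):
>>>>>>> $ printf '%x\n' $(( 0x504a042a42 & ~0x4a042a42 ))
>>>>>>> 5000000000
>>>>>>> $ printf '%x\n' $(( 1 << 36 ))   # CEPH_FEATURE_CRUSH_V2
>>>>>>> 1000000000
>>>>>>> $ printf '%x\n' $(( 1 << 38 ))   # OSD_ERASURE_CODES / OSD_TMAP2OMAP
>>>>>>> 4000000000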
>>>>>>> 
>>>>>>> That is happening on two separate boxes that are just my NFS and block
>>>>>>> gateways (they are not osd/mon/mds). So on them I just need something
>>>>>>> like:
>>>>>>> sudo rbd map share2
>>>>>>> sudo mount -t xfs /dev/rbd1 /mnt/share2
>>>>>>> 
>>>>>>> On the Ceph cluster and on those two separate boxes:
>>>>>>> ~$ ceph -v
>>>>>>> ceph version 0.80.1
>>>>>>> 
>>>>>>> What could be the problem?
>>>>>> 
>>>>>> Which kernel version are you running?  Do you have any erasure coded
>>>>>> pools?
>>>>>> 
>>>>>> Thanks,
>>>>>> 
>>>>>>                Ilya
>>>>> 
>>>>> ~$ uname -a
>>>>> Linux ceph-gw1 3.13.0-24-generic #47~precise2-Ubuntu SMP Fri May 2
>>>>> 23:30:46 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>>>>> 
>>>>> Yes, one of the pools is an erasure-coded pool, but the only thing I use
>>>>> on that box is rbd, and the rbd pool is not an EC pool; it is a
>>>>> replicated pool. I do not touch the EC pool from there. Or at least,
>>>>> I believe so :)
>>>> 
>>>> 
>>>> Well, I saw something similar with CephFS: I didn't touch the pools in use
>>>> by cephfs, but I created another pool with erasure coding, and the Ceph
>>>> client (kernel 3.13, which is not recent enough for EC) also stopped
>>>> working with 'feature set mismatch'. So I guess the clients can't read the
>>>> crushmap anymore once there is any mention of 'erasure' in it :)
>>> 
>>> Unfortunately that's true.  If there are any erasure-coded pools in the
>>> cluster, kernel clients (both krbd and kcephfs) won't work.  The only
>>> way they will work is if you remove all erasure-coded pools.
>>> 
>>> CRUSH_V2 is also not present in 3.13.  You'll have to upgrade to 3.14.
>>> Alternatively, CRUSH_V2 can be disabled, but I can't tell you how off
>>> the top of my head.  The fundamental problem is that you are running
>>> the latest userspace, and the defaults it ships with are incompatible
>>> with older kernels.
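>>>
>>> If memory serves, the workaround goes roughly like this, but treat it as
>>> an untested sketch ("ecpool" stands in for whatever your EC pool is
>>> called, and I'd have to verify that legacy tunables actually clear the
>>> CRUSH_V2 bit):
>>>
>>>   # drop the EC pool(s); the ERASURE_CODES requirement should go away
>>>   ceph osd pool delete ecpool ecpool --yes-i-really-really-mean-it
>>>   # fall back to the older crush tunables
>>>   ceph osd crush tunables legacy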
>>> 
>>> Thanks,
>>> 
>>>                Ilya
>> 
>> Thanks. Upgrading to 3.14 solved the CRUSH_V2 issue.
>> 
>> Regarding krbd and kcephfs...
>> 
>> If that is the case, it is something that should be addressed more
>> clearly in the documentation on ceph.com.
>> There is a note that CRUSH_TUNABLES3 (chooseleaf_vary_r) requires "Linux
>> kernel version v3.15 or later (for the file system and RBD kernel
>> clients)", but nothing else.
> 
> I think CRUSH_V2 is also mentioned somewhere, most probably in the
> release notes, but you are right, it should be centralized and easy to
> find.
> 
>> 
>> Only now, with your information, was I able to find
>> https://lkml.org/lkml/2014/4/7/257
>>
>> Anyway... What I want to test is an SSD pool as a cache pool in front of
>> an EC pool. Is there some way to update krbd manually (from GitHub?), or
>> do I need to wait for 3.15 for this?
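>>
>> For reference, the tiering setup I have in mind is roughly the following
>> (pool names "ecpool" and "cachepool" are just placeholders):
>>
>> ceph osd tier add ecpool cachepool
>> ceph osd tier cache-mode cachepool writeback
>> ceph osd tier set-overlay ecpool cachepool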
> 
> The "if there are any erasure code pools in the cluster, kernel clients
> (both krbd and kcephfs) won't work" problem is getting fixed on the
> server side.  The next ceph release will have the fix and you will be
> able to use 3.14 kernel with clusters that have EC pools.
> 
> Thanks,
> 
>                Ilya

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
