Re: [ceph-users] How to migrate ms_type to async ?

2018-01-25
I’ve tested that, and yes, the cluster works fine with some nodes using the
async messenger while others use the simple messenger.
Thank you, Greg.
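
For anyone wanting to double-check the same thing on their own cluster, one way
is to ask a running daemon's admin socket which messenger it picked up. A
minimal sketch, with osd.0 as a placeholder daemon name and assuming the admin
socket is reachable on that host:

    # query a running daemon's effective ms_type (osd.0 is just an example id)
    ceph daemon osd.0 config get ms_type
    # expected output is JSON along the lines of: { "ms_type": "async" }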



Yes, you can (and should!) restart them one at a time to avoid downtime.
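
Roughly, the one-at-a-time restart looks like the sketch below; this assumes a
systemd-based Jewel install, placeholder daemon ids, and waiting for the
cluster to report healthy before moving on (an illustration, not an official
procedure):

    # optionally keep CRUSH from rebalancing while daemons bounce
    ceph osd set noout
    # restart one daemon (or one node's daemons); "3" is a placeholder OSD id
    systemctl restart ceph-osd@3
    # wait for HEALTH_OK / all PGs active+clean before the next one
    ceph -s
    # ...repeat per daemon/node, then re-enable rebalancing
    ceph osd unset noout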

On Thu, Jan 25, 2018 at 9:57 AM 周 威 <cho...@msn.cn> wrote:
Hi Greg,

Can I restart the daemons one by one while staying online, or will there be
downtime until I have restarted all of them?

Thanks
Choury


From: Gregory Farnum <gfar...@redhat.com>
Sent: 2018-01-25 16:52:46
To: 周 威
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] How to migrate ms_type to async ?

You just have to set the config option ("ms_type = async", I think?) and
restart the daemons. Both messengers use the same protocol and there's no
migration in terms of data or compatibility setup.
-Greg
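
For completeness, the change being described would look roughly like this in
ceph.conf on each node before restarting its daemons; ms_type is the option
named above, and the [global] placement is just a sketch:

    [global]
        # use the async messenger instead of the simple messenger
        # (takes effect when each daemon is restarted)
        ms_type = async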

On Thu, Jan 25, 2018 at 9:43 AM 周 威 <cho...@msn.cn> wrote:
Hi all,

I have a Jewel cluster that was upgraded from Hammer. I want to migrate
ms_type to async, but I can't find any documentation about it. Does somebody
know how to do that?

Thanks
Choury
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Re: How to migrate ms_type to async ?

2018-01-25
Hi Greg,

Can I restart the daemons one by one while staying online, or will there be
downtime until I have restarted all of them?

Thanks
Choury


From: Gregory Farnum <gfar...@redhat.com>
Sent: 2018-01-25 16:52:46
To: 周 威
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] How to migrate ms_type to async ?

You just have to set the config option ("ms_type = async", I think?) and
restart the daemons. Both messengers use the same protocol and there's no
migration in terms of data or compatibility setup.
-Greg

On Thu, Jan 25, 2018 at 9:43 AM 周 威 <cho...@msn.cn> wrote:
Hi all,

I have a Jewel cluster that was upgraded from Hammer. I want to migrate
ms_type to async, but I can't find any documentation about it. Does somebody
know how to do that?

Thanks
Choury
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] How to migrate ms_type to async ?

2018-01-25
Hi all,

I have a Jewel cluster that was upgraded from Hammer. I want to migrate
ms_type to async, but I can't find any documentation about it. Does somebody
know how to do that?

Thanks
Choury
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Re: Re: Re: Can't delete file in cephfs with "No space left on device"

2017-12-26
The clients they are using are mainly fuse (10.2.9 and 0.94.9).

-----Original Message-----
From: Yan, Zheng [mailto:uker...@gmail.com]
Sent: 2017-12-27 10:32
To: 周 威 <cho...@msn.cn>
Cc: Cary <dynamic.c...@gmail.com>; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Re: Re: Can't delete file in cephfs with "No space
left on device"

On Tue, Dec 26, 2017 at 2:28 PM, 周 威 <cho...@msn.cn> wrote:
> We don't use hardlinks.
> I reduced the mds_cache_size from 1000 to 200.
> After that, num_strays dropped to about 100k and the cluster is normal
> now. I think there is some bug here.
> Anyway, thanks for your reply!
>

This seems like a client bug. Which client do you use (kclient or fuse), and
which version?


> -----Original Message-----
> From: Cary [mailto:dynamic.c...@gmail.com]
> Sent: 2017-12-26 14:08
> To: 周 威 <cho...@msn.cn>
> Cc: ceph-users@lists.ceph.com
> Subject: Re: Re: [ceph-users] Can't delete file in cephfs with "No space left
> on device"
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013646.html
>
> On Tue, Dec 26, 2017 at 6:07 AM, Cary <dynamic.c...@gmail.com> wrote:
>> Are you using hardlinks in cephfs?
>>
>>
>> On Tue, Dec 26, 2017 at 3:42 AM, 周 威 <cho...@msn.cn> wrote:
>>> The output of ceph osd df:
>>>
>>>
>>>
>>> ID WEIGHT  REWEIGHT SIZE   USEAVAIL  %USE  VAR  PGS
>>>
>>> 0 1.62650  1.0  1665G  1279G   386G 76.82 1.05 343
>>>
>>> 1 1.62650  1.0  1665G  1148G   516G 68.97 0.94 336
>>>
>>> 2 1.62650  1.0  1665G  1253G   411G 75.27 1.03 325
>>>
>>> 3 1.62650  1.0  1665G  1192G   472G 71.60 0.98 325
>>>
>>> 4 1.62650  1.0  1665G  1205G   460G 72.35 0.99 341
>>>
>>> 5 1.62650  1.0  1665G  1381G   283G 82.95 1.13 364
>>>
>>> 6 1.62650  1.0  1665G  1069G   595G 64.22 0.88 322
>>>
>>> 7 1.62650  1.0  1665G  1222G   443G 73.38 1.00 337
>>>
>>> 8 1.62650  1.0  1665G  1120G   544G 67.29 0.92 312
>>>
>>> 9 1.62650  1.0  1665G  1166G   498G 70.04 0.96 336
>>>
>>> 10 1.62650  1.0  1665G  1254G   411G 75.31 1.03 348
>>>
>>> 11 1.62650  1.0  1665G  1352G   313G 81.19 1.11 341
>>>
>>> 12 1.62650  1.0  1665G  1174G   490G 70.52 0.96 328
>>>
>>> 13 1.62650  1.0  1665G  1281G   383G 76.95 1.05 345
>>>
>>> 14 1.62650  1.0  1665G  1147G   518G 68.88 0.94 339
>>>
>>> 15 1.62650  1.0  1665G  1236G   429G 74.24 1.01 334
>>>
>>> 20 1.62650  1.0  1665G  1166G   499G 70.03 0.96 325
>>>
>>> 21 1.62650  1.0  1665G  1371G   293G 82.35 1.13 377
>>>
>>> 22 1.62650  1.0  1665G  1110G   555G 66.67 0.91 341
>>>
>>> 23 1.62650  1.0  1665G  1221G   443G 73.36 1.00 327
>>>
>>> 16 1.62650  1.0  1665G  1354G   310G 81.34 1.11 352
>>>
>>> 17 1.62650  1.0  1665G  1250G   415G 75.06 1.03 341
>>>
>>> 18 1.62650  1.0  1665G  1179G   486G 70.80 0.97 316
>>>
>>> 19 1.62650  1.0  1665G  1236G   428G 74.26 1.01 333
>>>
>>> 24 1.62650  1.0  1665G  1146G   518G 68.86 0.94 325
>>>
>>> 25 1.62650  1.0  1665G  1033G   632G 62.02 0.85 309
>>>
>>> 26 1.62650  1.0  1665G  1234G   431G 74.11 1.01 334
>>>
>>> 27 1.62650  1.0  1665G  1342G   322G 80.62 1.10 352
>>>
>>>   TOTAL 46635G 34135G 12500G 73.20
>>>
>>> MIN/MAX VAR: 0.85/1.13  STDDEV: 5.28
>>>
>>>
>>>
>>> From: Cary [mailto:dynamic.c...@gmail.com]
>>> Sent: 2017-12-26 11:40
>>> To: 周 威 <cho...@msn.cn>
>>> Cc: ceph-users@lists.ceph.com
>>> Subject: Re: [ceph-users] Can't delete file in cephfs with "No space left
>>> on device"
>>>
>>>
>>>
>>> Could you post the output of “ceph osd df”?
>>>
>>>
>>> On Dec 25, 2017, at 19:46, 周 威 <cho...@msn.cn> wrote:
>>>
>>> Hi all:
>>>
>>>
>>>
>>> Ceph version:
>>> ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)
>>>
>>>
>>>
>>> Ceph df:
>>>
>>> GLOBAL:
>>>
>>> SI

[ceph-users] Re: Re: Can't delete file in cephfs with "No space left on device"

2017-12-25
We don't use hardlinks.
I reduced the mds_cache_size from 1000 to 200.
After that, num_strays dropped to about 100k.
The cluster is normal now. I think there is some bug here.
Anyway, thanks for your reply!
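
In case it is useful to anyone else experimenting with the same knob, a
minimal sketch of changing it on a live MDS through the admin socket; mds.a
and the value are placeholders, and the change is not persistent across
restarts, so it would also need to go into ceph.conf:

    # lower the MDS cache size on a running daemon (placeholder name and value)
    ceph daemon mds.a config set mds_cache_size 200000
    # confirm what the daemon is now using
    ceph daemon mds.a config get mds_cache_size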

-----Original Message-----
From: Cary [mailto:dynamic.c...@gmail.com]
Sent: 2017-12-26 14:08
To: 周 威 <cho...@msn.cn>
Cc: ceph-users@lists.ceph.com
Subject: Re: Re: [ceph-users] Can't delete file in cephfs with "No space left
on device"

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013646.html

On Tue, Dec 26, 2017 at 6:07 AM, Cary <dynamic.c...@gmail.com> wrote:
> Are you using hardlinks in cephfs?
>
>
> On Tue, Dec 26, 2017 at 3:42 AM, 周 威 <cho...@msn.cn> wrote:
>> The output of ceph osd df:
>>
>>
>>
>> ID WEIGHT  REWEIGHT SIZE   USEAVAIL  %USE  VAR  PGS
>>
>> 0 1.62650  1.0  1665G  1279G   386G 76.82 1.05 343
>>
>> 1 1.62650  1.0  1665G  1148G   516G 68.97 0.94 336
>>
>> 2 1.62650  1.0  1665G  1253G   411G 75.27 1.03 325
>>
>> 3 1.62650  1.0  1665G  1192G   472G 71.60 0.98 325
>>
>> 4 1.62650  1.0  1665G  1205G   460G 72.35 0.99 341
>>
>> 5 1.62650  1.0  1665G  1381G   283G 82.95 1.13 364
>>
>> 6 1.62650  1.0  1665G  1069G   595G 64.22 0.88 322
>>
>> 7 1.62650  1.0  1665G  1222G   443G 73.38 1.00 337
>>
>> 8 1.62650  1.0  1665G  1120G   544G 67.29 0.92 312
>>
>> 9 1.62650  1.0  1665G  1166G   498G 70.04 0.96 336
>>
>> 10 1.62650  1.0  1665G  1254G   411G 75.31 1.03 348
>>
>> 11 1.62650  1.0  1665G  1352G   313G 81.19 1.11 341
>>
>> 12 1.62650  1.0  1665G  1174G   490G 70.52 0.96 328
>>
>> 13 1.62650  1.0  1665G  1281G   383G 76.95 1.05 345
>>
>> 14 1.62650  1.0  1665G  1147G   518G 68.88 0.94 339
>>
>> 15 1.62650  1.0  1665G  1236G   429G 74.24 1.01 334
>>
>> 20 1.62650  1.0  1665G  1166G   499G 70.03 0.96 325
>>
>> 21 1.62650  1.0  1665G  1371G   293G 82.35 1.13 377
>>
>> 22 1.62650  1.0  1665G  1110G   555G 66.67 0.91 341
>>
>> 23 1.62650  1.0  1665G  1221G   443G 73.36 1.00 327
>>
>> 16 1.62650  1.0  1665G  1354G   310G 81.34 1.11 352
>>
>> 17 1.62650  1.0  1665G  1250G   415G 75.06 1.03 341
>>
>> 18 1.62650  1.0  1665G  1179G   486G 70.80 0.97 316
>>
>> 19 1.62650  1.0  1665G  1236G   428G 74.26 1.01 333
>>
>> 24 1.62650  1.0  1665G  1146G   518G 68.86 0.94 325
>>
>> 25 1.62650  1.0  1665G  1033G   632G 62.02 0.85 309
>>
>> 26 1.62650  1.0  1665G  1234G   431G 74.11 1.01 334
>>
>> 27 1.62650  1.0  1665G  1342G   322G 80.62 1.10 352
>>
>>   TOTAL 46635G 34135G 12500G 73.20
>>
>> MIN/MAX VAR: 0.85/1.13  STDDEV: 5.28
>>
>>
>>
>> From: Cary [mailto:dynamic.c...@gmail.com]
>> Sent: 2017-12-26 11:40
>> To: 周 威 <cho...@msn.cn>
>> Cc: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] Can't delete file in cephfs with "No space left
>> on device"
>>
>>
>>
>> Could you post the output of “ceph osd df”?
>>
>>
>> On Dec 25, 2017, at 19:46, 周 威 <cho...@msn.cn> wrote:
>>
>> Hi all:
>>
>>
>>
>> Ceph version:
>> ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)
>>
>>
>>
>> Ceph df:
>>
>> GLOBAL:
>>
>> SIZE   AVAIL  RAW USED %RAW USED
>>
>> 46635G 12500G   34135G 73.19
>>
>>
>>
>> rm d
>>
>> rm: cannot remove `d': No space left on device
>>
>>
>>
>> and mds_cache:
>>
>> {
>>
>> "mds_cache": {
>>
>> "num_strays": 999713,
>>
>> "num_strays_purging": 0,
>>
>> "num_strays_delayed": 0,
>>
>> "num_purge_ops": 0,
>>
>> "strays_created": 999723,
>>
>> "strays_purged": 10,
>>
>> "strays_reintegrated": 0,
>>
>> "strays_migrated": 0,
>>
>> "num_recovering_processing": 0,
>>
>> "num_recovering_enqueued": 0,
>>
>> "num_recovering_prioritized": 0,
>>
>> "recovery_started": 107,
>>
>> "recovery_completed": 107
>>
>> }
>>
>> }
>>
>>
>>
>> It seems the strays count is stuck; what should I do?
>>
>> Thanks all.
>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Re: Can't delete file in cephfs with "No space left on device"

2017-12-25
The output of ceph osd df:

ID WEIGHT  REWEIGHT SIZE   USEAVAIL  %USE  VAR  PGS
0 1.62650  1.0  1665G  1279G   386G 76.82 1.05 343
1 1.62650  1.0  1665G  1148G   516G 68.97 0.94 336
2 1.62650  1.0  1665G  1253G   411G 75.27 1.03 325
3 1.62650  1.0  1665G  1192G   472G 71.60 0.98 325
4 1.62650  1.0  1665G  1205G   460G 72.35 0.99 341
5 1.62650  1.0  1665G  1381G   283G 82.95 1.13 364
6 1.62650  1.0  1665G  1069G   595G 64.22 0.88 322
7 1.62650  1.0  1665G  1222G   443G 73.38 1.00 337
8 1.62650  1.0  1665G  1120G   544G 67.29 0.92 312
9 1.62650  1.0  1665G  1166G   498G 70.04 0.96 336
10 1.62650  1.0  1665G  1254G   411G 75.31 1.03 348
11 1.62650  1.0  1665G  1352G   313G 81.19 1.11 341
12 1.62650  1.0  1665G  1174G   490G 70.52 0.96 328
13 1.62650  1.0  1665G  1281G   383G 76.95 1.05 345
14 1.62650  1.0  1665G  1147G   518G 68.88 0.94 339
15 1.62650  1.0  1665G  1236G   429G 74.24 1.01 334
20 1.62650  1.0  1665G  1166G   499G 70.03 0.96 325
21 1.62650  1.0  1665G  1371G   293G 82.35 1.13 377
22 1.62650  1.0  1665G  1110G   555G 66.67 0.91 341
23 1.62650  1.0  1665G  1221G   443G 73.36 1.00 327
16 1.62650  1.0  1665G  1354G   310G 81.34 1.11 352
17 1.62650  1.0  1665G  1250G   415G 75.06 1.03 341
18 1.62650  1.0  1665G  1179G   486G 70.80 0.97 316
19 1.62650  1.0  1665G  1236G   428G 74.26 1.01 333
24 1.62650  1.0  1665G  1146G   518G 68.86 0.94 325
25 1.62650  1.0  1665G  1033G   632G 62.02 0.85 309
26 1.62650  1.0  1665G  1234G   431G 74.11 1.01 334
27 1.62650  1.0  1665G  1342G   322G 80.62 1.10 352
  TOTAL 46635G 34135G 12500G 73.20
MIN/MAX VAR: 0.85/1.13  STDDEV: 5.28

From: Cary [mailto:dynamic.c...@gmail.com]
Sent: 2017-12-26 11:40
To: 周 威
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Can't delete file in cephfs with "No space left on device"

Could you post the output of “ceph osd df”?

On Dec 25, 2017, at 19:46, 周 威 <cho...@msn.cn> wrote:
Hi all:

Ceph version:
ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)

Ceph df:
GLOBAL:
SIZE   AVAIL  RAW USED %RAW USED
46635G 12500G   34135G 73.19

rm d
rm: cannot remove `d': No space left on device

and mds_cache:
{
"mds_cache": {
"num_strays": 999713,
"num_strays_purging": 0,
"num_strays_delayed": 0,
"num_purge_ops": 0,
"strays_created": 999723,
"strays_purged": 10,
"strays_reintegrated": 0,
"strays_migrated": 0,
"num_recovering_processing": 0,
"num_recovering_enqueued": 0,
"num_recovering_prioritized": 0,
"recovery_started": 107,
"recovery_completed": 107
}
}

It seems the strays count is stuck; what should I do?
Thanks all.
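
For anyone wondering where the mds_cache numbers above come from, they can be
pulled from the MDS admin socket; a sketch, with mds.a as a placeholder daemon
name and the filtering being just one way to slice the JSON:

    # dump the MDS perf counters and look at the stray bookkeeping
    ceph daemon mds.a perf dump | python -m json.tool | grep strays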
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Re: Re: Where can I find the fix commit of #3370 ?

2017-11-13
Hi, Ilya

The kernel version is 3.10.106.
Part of dmesg related to ceph:
[7349718.004905] libceph: osd297 down
[7349718.005190] libceph: osd299 down
[7349785.671015] libceph: osd295 down
[7350006.357509] libceph: osd291 weight 0x0 (out)
[7350006.357795] libceph: osd292 weight 0x0 (out)
[7350006.358075] libceph: osd293 weight 0x0 (out)
[7350006.358356] libceph: osd294 weight 0x0 (out)
[7350013.312399] libceph: osd289 weight 0x0 (out)
[7350013.312683] libceph: osd290 weight 0x0 (out)
[7350013.312964] libceph: osd296 weight 0x0 (out)
[7350013.313244] libceph: osd298 weight 0x0 (out)
[7350023.322571] libceph: osd288 weight 0x0 (out)
[7350038.338217] libceph: osd297 weight 0x0 (out)
[7350038.338501] libceph: osd299 weight 0x0 (out)
[7350115.364496] libceph: osd295 weight 0x0 (out)
[7350179.683200] libceph: osd294 weight 0x1 (in)
[7350179.683495] libceph: osd294 up
[7350193.654197] libceph: osd293 weight 0x1 (in)
[7350193.654486] libceph: osd297 weight 0x1 (in)
[7350193.654769] libceph: osd293 up
[7350193.655046] libceph: osd297 up
[7350228.750112] libceph: osd299 weight 0x1 (in)
[7350228.750399] libceph: osd299 up
[7350255.739415] libceph: osd289 weight 0x1 (in)
[7350255.739700] libceph: osd289 up
[7350268.578031] libceph: osd288 weight 0x1 (in)
[7350268.578315] libceph: osd288 up
[7383411.866068] libceph: osd299 down
[7383558.405675] libceph: osd299 up
[7383411.866068] libceph: osd299 down
[7383558.405675] libceph: osd299 up
[7387106.574308] libceph: osd291 weight 0x1 (in)
[7387106.574593] libceph: osd291 up
[7387124.168198] libceph: osd296 weight 0x1 (in)
[7387124.168492] libceph: osd296 up
[7387131.732934] libceph: osd292 weight 0x1 (in)
[7387131.733218] libceph: osd292 up
[7387131.741277] libceph: osd290 weight 0x1 (in)
[7387131.741558] libceph: osd290 up
[7387149.788781] libceph: osd298 weight 0x1 (in)
[7387149.789066] libceph: osd298 up

A node of OSDs was restarted some days before.
And after evicting the session:
[7679890.147116] libceph: mds0 x.x.x.x:6800 socket closed (con state OPEN)
[7679890.491439] libceph: mds0 x.x.x.x:6800 connection reset
[7679890.491727] libceph: reset on mds0
[7679890.492006] ceph: mds0 closed our session
[7679890.492286] ceph: mds0 reconnect start
[7679910.479911] ceph: mds0 caps stale
[7679927.886621] ceph: mds0 reconnect denied

We have to restart the machine to recover it.
I will send you an email if it happens again.

Thanks for your reply.
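
For anyone hitting a similar hang, a rough sketch of the client-side and
OSD-side checks mentioned in this thread (the debugfs glob and osd.12 are
placeholders; the ceph daemon command has to run on the host that owns that
OSD's admin socket):

    # in-flight requests as seen by the kernel client (one directory per client instance)
    cat /sys/kernel/debug/ceph/*/osdc
    # compare with what a suspect OSD thinks it has in flight (placeholder id)
    ceph daemon osd.12 ops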

-----Original Message-----
From: Ilya Dryomov [mailto:idryo...@gmail.com]
Sent: 2017-11-13 17:30
To: 周 威 <cho...@msn.cn>
Cc: ceph-users@lists.ceph.com
Subject: Re: Re: [ceph-users] Where can I find the fix commit of #3370 ?

On Mon, Nov 13, 2017 at 10:18 AM, 周 威 <cho...@msn.cn> wrote:
> Hi, Ilya
>
> I'm using the CentOS 7 kernel, which should be 3.10. I checked the patch,
> and it appears in my kernel source.
> We got the same stack as #3370; the process is hung in sleep_on_page_killable.
> The debugfs ceph/osdc file shows there is a read request waiting for a
> response, while the command `ceph daemon osd.x ops` shows nothing.
> Evicting the session from the mds does not help.
> The version of ceph cluster is 10.2.9.

I don't think it's related to that ticket.

Which version of centos 7?  Can you provide dmesg?

Is it reproducible?  A debug ms = 1 log for that OSD would help with narrowing 
this down.

Thanks,

Ilya
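
A sketch of one way to raise that logging on a live OSD without restarting it,
using a placeholder OSD id; debug ms = 1 is very chatty, so it should be turned
back down once the hang has been captured:

    # raise messenger debugging on a running OSD
    ceph tell osd.12 injectargs '--debug_ms 1'
    # ...reproduce the problem, collect the OSD log, then restore the usual level
    ceph tell osd.12 injectargs '--debug_ms 0/5'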
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Re: Where can I find the fix commit of #3370 ?

2017-11-13
Hi, Ilya

I'm using the CentOS 7 kernel, which should be 3.10.
I checked the patch, and it appears in my kernel source.
We got the same stack as #3370; the process is hung in sleep_on_page_killable.
The debugfs ceph/osdc file shows there is a read request waiting for a
response, while the command `ceph daemon osd.x ops` shows nothing.
Evicting the session from the mds does not help.
The version of the ceph cluster is 10.2.9.

Thanks for your reply.

-----Original Message-----
From: Ilya Dryomov [mailto:idryo...@gmail.com]
Sent: 2017-11-13 16:59
To: 周 威
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Where can I find the fix commit of #3370 ?

On Mon, Nov 13, 2017 at 7:45 AM, 周 威 wrote:
> I met the same issue as http://tracker.ceph.com/issues/3370 ,
>
> But I can't find the commit id of
> 2978257c56935878f8a756c6cb169b569e99bb91. Can someone help me?

I updated the ticket.  It's very old though, which kernel are you running?

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] I can't create new pool in my cluster.

2017-02-09
The version I'm using is 0.94.9

When I try to create a pool, it shows:

Error EINVAL: error running crushmap through crushtool: (1) Operation
not permitted

What's wrong here?
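
Not a definitive answer, but since the error says the monitor failed while
running the crush map through crushtool, one way to narrow it down is to check
that crushtool is present on the monitor host and that the current map
round-trips cleanly by hand. A sketch, with arbitrary file names:

    # is crushtool installed and on the monitor's PATH?
    which crushtool
    # pull the current map and decompile/recompile it manually
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    crushtool -c crushmap.txt -o crushmap.new

If crushtool itself works, the "Operation not permitted" may be about whether
the mon process is allowed to execute it rather than about the map itself.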
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com