[ceph-users] Re: Re: Can't delete file in cephfs with "No space left on device"

2017-12-25 Thread 周 威
We don't use hardlinks.
I reduced the mds_cache_size from 1000 to 200.
After that, num_strays dropped to about 100k.
The cluster is back to normal now. I think there is a bug somewhere here.
Anyway, thanks for your reply!
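
For reference, a minimal sketch of the kind of commands involved, assuming a
single active MDS on Jewel; the MDS name (mds.a) and the new cache size are
placeholders, not values taken from this thread:

# Change the MDS cache size at runtime (not persistent across restarts;
# put mds_cache_size under [mds] in ceph.conf to make it stick).
ceph tell mds.a injectargs '--mds_cache_size 2000000'

# Watch the stray counters via the MDS admin socket (run on the MDS host).
ceph daemon mds.a perf dump | python -m json.tool | grep num_strays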

-----Original Message-----
From: Cary [mailto:dynamic.c...@gmail.com]
Sent: December 26, 2017 14:08
To: 周 威
Cc: ceph-users@lists.ceph.com
Subject: Re: Re: [ceph-users] Can't delete file in cephfs with "No space left on device"

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013646.html

On Tue, Dec 26, 2017 at 6:07 AM, Cary  wrote:
> Are you using hardlinks in cephfs?
>
>
> On Tue, Dec 26, 2017 at 3:42 AM, 周 威  wrote:
>> The output of ceph osd df
>>
>>
>>
>> ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
>> 0 1.62650  1.0  1665G  1279G   386G 76.82 1.05 343
>> 1 1.62650  1.0  1665G  1148G   516G 68.97 0.94 336
>> 2 1.62650  1.0  1665G  1253G   411G 75.27 1.03 325
>> 3 1.62650  1.0  1665G  1192G   472G 71.60 0.98 325
>> 4 1.62650  1.0  1665G  1205G   460G 72.35 0.99 341
>> 5 1.62650  1.0  1665G  1381G   283G 82.95 1.13 364
>> 6 1.62650  1.0  1665G  1069G   595G 64.22 0.88 322
>> 7 1.62650  1.0  1665G  1222G   443G 73.38 1.00 337
>> 8 1.62650  1.0  1665G  1120G   544G 67.29 0.92 312
>> 9 1.62650  1.0  1665G  1166G   498G 70.04 0.96 336
>> 10 1.62650  1.0  1665G  1254G   411G 75.31 1.03 348
>> 11 1.62650  1.0  1665G  1352G   313G 81.19 1.11 341
>> 12 1.62650  1.0  1665G  1174G   490G 70.52 0.96 328
>> 13 1.62650  1.0  1665G  1281G   383G 76.95 1.05 345
>> 14 1.62650  1.0  1665G  1147G   518G 68.88 0.94 339
>> 15 1.62650  1.0  1665G  1236G   429G 74.24 1.01 334
>> 20 1.62650  1.0  1665G  1166G   499G 70.03 0.96 325
>> 21 1.62650  1.0  1665G  1371G   293G 82.35 1.13 377
>> 22 1.62650  1.0  1665G  1110G   555G 66.67 0.91 341
>> 23 1.62650  1.0  1665G  1221G   443G 73.36 1.00 327
>> 16 1.62650  1.0  1665G  1354G   310G 81.34 1.11 352
>> 17 1.62650  1.0  1665G  1250G   415G 75.06 1.03 341
>> 18 1.62650  1.0  1665G  1179G   486G 70.80 0.97 316
>> 19 1.62650  1.0  1665G  1236G   428G 74.26 1.01 333
>> 24 1.62650  1.0  1665G  1146G   518G 68.86 0.94 325
>> 25 1.62650  1.0  1665G  1033G   632G 62.02 0.85 309
>> 26 1.62650  1.0  1665G  1234G   431G 74.11 1.01 334
>> 27 1.62650  1.0  1665G  1342G   322G 80.62 1.10 352
>>   TOTAL 46635G 34135G 12500G 73.20
>> MIN/MAX VAR: 0.85/1.13  STDDEV: 5.28
>>
>>
>>
>> From: Cary [mailto:dynamic.c...@gmail.com]
>> Sent: December 26, 2017 11:40
>> To: 周 威
>> Cc: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] Can't delete file in cephfs with "No space left
>> on device"
>>
>>
>>
>> Could you post the output of “ceph osd df”?
>>
>>
>> On Dec 25, 2017, at 19:46, 周 威 wrote:
>>
>> Hi all:
>>
>>
>>
>> Ceph version:
>> ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)
>>
>>
>>
>> Ceph df:
>>
>> GLOBAL:
>>
>> SIZE   AVAIL  RAW USED %RAW USED
>>
>> 46635G 12500G   34135G 73.19
>>
>>
>>
>> rm d
>>
>> rm: cannot remove `d': No space left on device
>>
>>
>>
>> and mds_cache:
>>
>> {
>> "mds_cache": {
>> "num_strays": 999713,
>> "num_strays_purging": 0,
>> "num_strays_delayed": 0,
>> "num_purge_ops": 0,
>> "strays_created": 999723,
>> "strays_purged": 10,
>> "strays_reintegrated": 0,
>> "strays_migrated": 0,
>> "num_recovering_processing": 0,
>> "num_recovering_enqueued": 0,
>> "num_recovering_prioritized": 0,
>> "recovery_started": 107,
>> "recovery_completed": 107
>> }
>> }
>>
>>
>>
>> It seems the strays num is stuck, what should I do?
>>
>> Thanks all.
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Re: Can't delete file in cephfs with "No space left on device"

2017-12-25 Thread Cary
Are you using hardlinks in cephfs?


On Tue, Dec 26, 2017 at 3:42 AM, 周 威  wrote:
> The output of ceph osd df
>
>
>
> ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
> 0 1.62650  1.0  1665G  1279G   386G 76.82 1.05 343
> 1 1.62650  1.0  1665G  1148G   516G 68.97 0.94 336
> 2 1.62650  1.0  1665G  1253G   411G 75.27 1.03 325
> 3 1.62650  1.0  1665G  1192G   472G 71.60 0.98 325
> 4 1.62650  1.0  1665G  1205G   460G 72.35 0.99 341
> 5 1.62650  1.0  1665G  1381G   283G 82.95 1.13 364
> 6 1.62650  1.0  1665G  1069G   595G 64.22 0.88 322
> 7 1.62650  1.0  1665G  1222G   443G 73.38 1.00 337
> 8 1.62650  1.0  1665G  1120G   544G 67.29 0.92 312
> 9 1.62650  1.0  1665G  1166G   498G 70.04 0.96 336
> 10 1.62650  1.0  1665G  1254G   411G 75.31 1.03 348
> 11 1.62650  1.0  1665G  1352G   313G 81.19 1.11 341
> 12 1.62650  1.0  1665G  1174G   490G 70.52 0.96 328
> 13 1.62650  1.0  1665G  1281G   383G 76.95 1.05 345
> 14 1.62650  1.0  1665G  1147G   518G 68.88 0.94 339
> 15 1.62650  1.0  1665G  1236G   429G 74.24 1.01 334
> 20 1.62650  1.0  1665G  1166G   499G 70.03 0.96 325
> 21 1.62650  1.0  1665G  1371G   293G 82.35 1.13 377
> 22 1.62650  1.0  1665G  1110G   555G 66.67 0.91 341
> 23 1.62650  1.0  1665G  1221G   443G 73.36 1.00 327
> 16 1.62650  1.0  1665G  1354G   310G 81.34 1.11 352
> 17 1.62650  1.0  1665G  1250G   415G 75.06 1.03 341
> 18 1.62650  1.0  1665G  1179G   486G 70.80 0.97 316
> 19 1.62650  1.0  1665G  1236G   428G 74.26 1.01 333
> 24 1.62650  1.0  1665G  1146G   518G 68.86 0.94 325
> 25 1.62650  1.0  1665G  1033G   632G 62.02 0.85 309
> 26 1.62650  1.0  1665G  1234G   431G 74.11 1.01 334
> 27 1.62650  1.0  1665G  1342G   322G 80.62 1.10 352
>   TOTAL 46635G 34135G 12500G 73.20
> MIN/MAX VAR: 0.85/1.13  STDDEV: 5.28
>
>
>
> From: Cary [mailto:dynamic.c...@gmail.com]
> Sent: December 26, 2017 11:40
> To: 周 威
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Can't delete file in cephfs with "No space left on
> device"
>
>
>
> Could you post the output of “ceph osd df”?
>
>
> On Dec 25, 2017, at 19:46, 周 威 wrote:
>
> Hi all:
>
>
>
> Ceph version:
> ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)
>
>
>
> Ceph df:
>
> GLOBAL:
>
> SIZE   AVAIL  RAW USED %RAW USED
>
> 46635G 12500G   34135G 73.19
>
>
>
> rm d
>
> rm: cannot remove `d': No space left on device
>
>
>
> and mds_cache:
>
> {
> "mds_cache": {
> "num_strays": 999713,
> "num_strays_purging": 0,
> "num_strays_delayed": 0,
> "num_purge_ops": 0,
> "strays_created": 999723,
> "strays_purged": 10,
> "strays_reintegrated": 0,
> "strays_migrated": 0,
> "num_recovering_processing": 0,
> "num_recovering_enqueued": 0,
> "num_recovering_prioritized": 0,
> "recovery_started": 107,
> "recovery_completed": 107
> }
> }
>
>
>
> It seems the strays num is stuck, what should I do?
>
> Thanks all.
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Re: Can't delete file in cephfs with "No space left on device"

2017-12-25 Thread Cary
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013646.html

On Tue, Dec 26, 2017 at 6:07 AM, Cary  wrote:
> Are you using hardlinks in cephfs?
>
>
> On Tue, Dec 26, 2017 at 3:42 AM, 周 威  wrote:
>> The output of ceph osd df
>>
>>
>>
>> ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
>> 0 1.62650  1.0  1665G  1279G   386G 76.82 1.05 343
>> 1 1.62650  1.0  1665G  1148G   516G 68.97 0.94 336
>> 2 1.62650  1.0  1665G  1253G   411G 75.27 1.03 325
>> 3 1.62650  1.0  1665G  1192G   472G 71.60 0.98 325
>> 4 1.62650  1.0  1665G  1205G   460G 72.35 0.99 341
>> 5 1.62650  1.0  1665G  1381G   283G 82.95 1.13 364
>> 6 1.62650  1.0  1665G  1069G   595G 64.22 0.88 322
>> 7 1.62650  1.0  1665G  1222G   443G 73.38 1.00 337
>> 8 1.62650  1.0  1665G  1120G   544G 67.29 0.92 312
>> 9 1.62650  1.0  1665G  1166G   498G 70.04 0.96 336
>> 10 1.62650  1.0  1665G  1254G   411G 75.31 1.03 348
>> 11 1.62650  1.0  1665G  1352G   313G 81.19 1.11 341
>> 12 1.62650  1.0  1665G  1174G   490G 70.52 0.96 328
>> 13 1.62650  1.0  1665G  1281G   383G 76.95 1.05 345
>> 14 1.62650  1.0  1665G  1147G   518G 68.88 0.94 339
>> 15 1.62650  1.0  1665G  1236G   429G 74.24 1.01 334
>> 20 1.62650  1.0  1665G  1166G   499G 70.03 0.96 325
>> 21 1.62650  1.0  1665G  1371G   293G 82.35 1.13 377
>> 22 1.62650  1.0  1665G  1110G   555G 66.67 0.91 341
>> 23 1.62650  1.0  1665G  1221G   443G 73.36 1.00 327
>> 16 1.62650  1.0  1665G  1354G   310G 81.34 1.11 352
>> 17 1.62650  1.0  1665G  1250G   415G 75.06 1.03 341
>> 18 1.62650  1.0  1665G  1179G   486G 70.80 0.97 316
>> 19 1.62650  1.0  1665G  1236G   428G 74.26 1.01 333
>> 24 1.62650  1.0  1665G  1146G   518G 68.86 0.94 325
>> 25 1.62650  1.0  1665G  1033G   632G 62.02 0.85 309
>> 26 1.62650  1.0  1665G  1234G   431G 74.11 1.01 334
>> 27 1.62650  1.0  1665G  1342G   322G 80.62 1.10 352
>>   TOTAL 46635G 34135G 12500G 73.20
>> MIN/MAX VAR: 0.85/1.13  STDDEV: 5.28
>>
>>
>>
>> From: Cary [mailto:dynamic.c...@gmail.com]
>> Sent: December 26, 2017 11:40
>> To: 周 威
>> Cc: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] Can't delete file in cephfs with "No space left on
>> device"
>>
>>
>>
>> Could you post the output of “ceph osd df”?
>>
>>
>> On Dec 25, 2017, at 19:46, 周 威 wrote:
>>
>> Hi all:
>>
>>
>>
>> Ceph version:
>> ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)
>>
>>
>>
>> Ceph df:
>>
>> GLOBAL:
>>
>> SIZE   AVAIL  RAW USED %RAW USED
>>
>> 46635G 12500G   34135G 73.19
>>
>>
>>
>> rm d
>>
>> rm: cannot remove `d': No space left on device
>>
>>
>>
>> and mds_cache:
>>
>> {
>> "mds_cache": {
>> "num_strays": 999713,
>> "num_strays_purging": 0,
>> "num_strays_delayed": 0,
>> "num_purge_ops": 0,
>> "strays_created": 999723,
>> "strays_purged": 10,
>> "strays_reintegrated": 0,
>> "strays_migrated": 0,
>> "num_recovering_processing": 0,
>> "num_recovering_enqueued": 0,
>> "num_recovering_prioritized": 0,
>> "recovery_started": 107,
>> "recovery_completed": 107
>> }
>> }
>>
>>
>>
>> It seems the strays num is stuck, what should I do?
>>
>> Thanks all.
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Re: Can't delete file in cephfs with "No space left on device"

2017-12-25 Thread 周 威
The output of ceph osd df

ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
0 1.62650  1.0  1665G  1279G   386G 76.82 1.05 343
1 1.62650  1.0  1665G  1148G   516G 68.97 0.94 336
2 1.62650  1.0  1665G  1253G   411G 75.27 1.03 325
3 1.62650  1.0  1665G  1192G   472G 71.60 0.98 325
4 1.62650  1.0  1665G  1205G   460G 72.35 0.99 341
5 1.62650  1.0  1665G  1381G   283G 82.95 1.13 364
6 1.62650  1.0  1665G  1069G   595G 64.22 0.88 322
7 1.62650  1.0  1665G  1222G   443G 73.38 1.00 337
8 1.62650  1.0  1665G  1120G   544G 67.29 0.92 312
9 1.62650  1.0  1665G  1166G   498G 70.04 0.96 336
10 1.62650  1.0  1665G  1254G   411G 75.31 1.03 348
11 1.62650  1.0  1665G  1352G   313G 81.19 1.11 341
12 1.62650  1.0  1665G  1174G   490G 70.52 0.96 328
13 1.62650  1.0  1665G  1281G   383G 76.95 1.05 345
14 1.62650  1.0  1665G  1147G   518G 68.88 0.94 339
15 1.62650  1.0  1665G  1236G   429G 74.24 1.01 334
20 1.62650  1.0  1665G  1166G   499G 70.03 0.96 325
21 1.62650  1.0  1665G  1371G   293G 82.35 1.13 377
22 1.62650  1.0  1665G  1110G   555G 66.67 0.91 341
23 1.62650  1.0  1665G  1221G   443G 73.36 1.00 327
16 1.62650  1.0  1665G  1354G   310G 81.34 1.11 352
17 1.62650  1.0  1665G  1250G   415G 75.06 1.03 341
18 1.62650  1.0  1665G  1179G   486G 70.80 0.97 316
19 1.62650  1.0  1665G  1236G   428G 74.26 1.01 333
24 1.62650  1.0  1665G  1146G   518G 68.86 0.94 325
25 1.62650  1.0  1665G  1033G   632G 62.02 0.85 309
26 1.62650  1.0  1665G  1234G   431G 74.11 1.01 334
27 1.62650  1.0  1665G  1342G   322G 80.62 1.10 352
  TOTAL 46635G 34135G 12500G 73.20
MIN/MAX VAR: 0.85/1.13  STDDEV: 5.28

From: Cary [mailto:dynamic.c...@gmail.com]
Sent: December 26, 2017 11:40
To: 周 威
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Can't delete file in cephfs with "No space left on device"

Could you post the output of “ceph osd df”?

On Dec 25, 2017, at 19:46, 周 威 wrote:
Hi all:

Ceph version:
ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)

Ceph df:
GLOBAL:
SIZE   AVAIL  RAW USED %RAW USED
46635G 12500G   34135G 73.19

rm d
rm: cannot remove `d': No space left on device

and mds_cache:
{
"mds_cache": {
"num_strays": 999713,
"num_strays_purging": 0,
"num_strays_delayed": 0,
"num_purge_ops": 0,
"strays_created": 999723,
"strays_purged": 10,
"strays_reintegrated": 0,
"strays_migrated": 0,
"num_recovering_processing": 0,
"num_recovering_enqueued": 0,
"num_recovering_prioritized": 0,
"recovery_started": 107,
"recovery_completed": 107
}
}

It seems the strays num is stuck, what should I do?
Thanks all.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Can't delete file in cephfs with "No space left on device"

2017-12-25 Thread Cary
Could you post the output of “ceph osd df”?

On Dec 25, 2017, at 19:46, 周 威 wrote:

Hi all:
 
Ceph version:
ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)
 
Ceph df:
GLOBAL:
SIZE   AVAIL  RAW USED %RAW USED
46635G 12500G   34135G 73.19
 
rm d
rm: cannot remove `d': No space left on device
 
and mds_cache:
{
"mds_cache": {
"num_strays": 999713,
"num_strays_purging": 0,
"num_strays_delayed": 0,
"num_purge_ops": 0,
"strays_created": 999723,
"strays_purged": 10,
"strays_reintegrated": 0,
"strays_migrated": 0,
"num_recovering_processing": 0,
"num_recovering_enqueued": 0,
"num_recovering_prioritized": 0,
"recovery_started": 107,
"recovery_completed": 107
}
}
 
It seems the strays num is stuck, what should I do?
Thanks all.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Bluestore: inaccurate disk usage statistics problem?

2017-12-25 Thread Zhi Zhang
Hi,

We recently started to test bluestore with a huge number of small files
(only dozens of bytes per file). We have 22 OSDs in a test cluster
using ceph-12.2.1 with 2 replicas, and each OSD disk is 2TB in size. After
we wrote about 150 million files through cephfs, we found the usage of each
OSD disk reported by "ceph osd df" was more than 40%, which meant
more than 800GB was used on each disk, but the actual total file size
was only about 5.2 GB, as reported by "ceph df" and also
calculated by ourselves.

The test is ongoing. I wonder whether the cluster will report OSD
full after we have written about 300 million files, even though the actual
total file size will be far, far less than the disk usage. I will update
with the result when the test is done.

My question is: are the disk usage statistics in bluestore
inaccurate, or does padding, alignment or something else in
bluestore waste the disk space?

Thanks!

$ ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
 0   hdd 1.49728  1.0  1862G   853G  1009G 45.82 1.00 110
 1   hdd 1.69193  1.0  1862G   807G  1054G 43.37 0.94 105
 2   hdd 1.81929  1.0  1862G   811G  1051G 43.57 0.95 116
 3   hdd 2.00700  1.0  1862G   839G  1023G 45.04 0.98 122
 4   hdd 2.06334  1.0  1862G   886G   976G 47.58 1.03 130
 5   hdd 1.99051  1.0  1862G   856G  1006G 45.95 1.00 118
 6   hdd 1.67519  1.0  1862G   881G   981G 47.32 1.03 114
 7   hdd 1.81929  1.0  1862G   874G   988G 46.94 1.02 120
 8   hdd 2.08881  1.0  1862G   885G   976G 47.56 1.03 130
 9   hdd 1.64265  1.0  1862G   852G  1010G 45.78 0.99 106
10   hdd 1.81929  1.0  1862G   873G   989G 46.88 1.02 109
11   hdd 2.20041  1.0  1862G   915G   947G 49.13 1.07 131
12   hdd 1.45694  1.0  1862G   874G   988G 46.94 1.02 110
13   hdd 2.03847  1.0  1862G   821G  1041G 44.08 0.96 113
14   hdd 1.53812  1.0  1862G   810G  1052G 43.50 0.95 112
15   hdd 1.52914  1.0  1862G   874G   988G 46.94 1.02 111
16   hdd 1.99176  1.0  1862G   810G  1052G 43.51 0.95 114
17   hdd 1.81929  1.0  1862G   841G  1021G 45.16 0.98 119
18   hdd 1.70901  1.0  1862G   831G  1031G 44.61 0.97 113
19   hdd 1.67519  1.0  1862G   875G   987G 47.02 1.02 115
20   hdd 2.03847  1.0  1862G   864G   998G 46.39 1.01 115
21   hdd 2.18794  1.0  1862G   920G   942G 49.39 1.07 127
TOTAL 40984G 18861G 22122G 46.02

$ ceph df
GLOBAL:
SIZE   AVAIL  RAW USED %RAW USED
40984G 22122G   18861G 46.02
POOLS:
NAMEID USED  %USED MAX AVAIL OBJECTS
cephfs_metadata 5   160M 0 6964G 77342
cephfs_data 6  5193M  0.04 6964G 151292669
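
Taking the numbers above at face value, the usage is roughly what per-object
allocation granularity would predict. This is only a back-of-envelope check and
assumes bluestore_min_alloc_size_hdd is still at its 64 KiB default on this
12.2.1 cluster (checkable on an OSD host with
"ceph daemon osd.0 config get bluestore_min_alloc_size_hdd"):

files=150000000; replicas=2; min_alloc=65536   # bytes
echo $(( files * replicas * min_alloc / 1024 / 1024 / 1024 ))   # ~18310 GiB

That lands close to the ~18861G RAW USED reported above, so if the assumption
holds, the statistics look accurate and the space is going to the minimum
allocation unit per tiny object (plus metadata), not to a reporting bug.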


Regards,
Zhi Zhang (David)
Contact: zhang.david2...@gmail.com
  zhangz.da...@outlook.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Can't delete file in cephfs with "No space left on device"

2017-12-25 Thread 周 威
Hi all:

Ceph version:
ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)

Ceph df:
GLOBAL:
SIZE   AVAIL  RAW USED %RAW USED
46635G 12500G   34135G 73.19

rm d
rm: cannot remove `d': No space left on device

and mds_cache:
{
"mds_cache": {
"num_strays": 999713,
"num_strays_purging": 0,
"num_strays_delayed": 0,
"num_purge_ops": 0,
"strays_created": 999723,
"strays_purged": 10,
"strays_reintegrated": 0,
"strays_migrated": 0,
"num_recovering_processing": 0,
"num_recovering_enqueued": 0,
"num_recovering_prioritized": 0,
"recovery_started": 107,
"recovery_completed": 107
}
}

It seems the strays num is stuck, what should I do?
Thanks all.
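
One reading of those counters, just arithmetic on the values pasted above: the
MDS has purged almost nothing since it started, which is why the stray
directory keeps filling up.

echo $(( 999723 - 10 ))   # strays_created - strays_purged = 999713 = num_strays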
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] iSCSI over RBD

2017-12-25 Thread Joshua Chen
Hello folks,
  I am trying to share my ceph rbd images through the iSCSI protocol.

I am trying the iscsi-gateway described at
http://docs.ceph.com/docs/master/rbd/iscsi-overview/


Now

systemctl start rbd-target-api
is working and I can run gwcli
(on a CentOS 7.4 OSD node)

gwcli
/> ls
o- / ......................................................... [...]
  o- clusters ................................................ [Clusters: 1]
  | o- ceph .................................................. [HEALTH_OK]
  |   o- pools ............................................... [Pools: 1]
  |   | o- rbd ............................................... [(x3), Commit: 0b/25.9T (0%), Used: 395M]
  |   o- topology ............................................ [OSDs: 9,MONs: 3]
  o- disks ................................................... [0b, Disks: 0]
  o- iscsi-target ............................................ [Targets: 0]


but when I created an iscsi-target, I got:

Local LIO instance already has LIO configured with a target - unable to
continue


/> /iscsi-target create iqn.2003-01.org.linux-iscsi.ceph-node1.x8664:sn.571e1ab51af2
Local LIO instance already has LIO configured with a target - unable to
continue
/>

and there is no more progress at all.

Is there something I need to check, or something missing? Please direct me
to further debugging steps.

Thanks in advance

Cheers
Joshua
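
One possible direction, not a verified fix: that gwcli message usually means
the local LIO kernel target already holds a configuration (for example from an
earlier manual targetcli setup) that gwcli will not take over. A sketch of how
to inspect it and, only if that configuration is disposable, clear it; the
exact recovery steps for this setup are an assumption:

# Show whatever LIO configuration already exists on this node.
targetcli ls

# WARNING: this wipes ALL existing LIO exports on the node.
targetcli clearconfig confirm=True

# Restart the gateway API and retry the /iscsi-target create in gwcli.
systemctl restart rbd-target-api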
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com