We don't use hardlinks.
I reduced the mds_cache_size from 1000 to 200.
After that, num_strays dropped to about 100k
and the cluster is back to normal. I think there is a bug here somewhere.
Anyway, thanks for your reply!
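For reference, the stray count mentioned above comes from the MDS perf counters: `ceph daemon mds.<id> perf dump` reports it under the `mds_cache` section (as in the JSON fragment quoted later in this thread). A minimal sketch of pulling `num_strays` out of that output; the counter values below are invented for illustration:

```python
import json

# Hypothetical excerpt of `ceph daemon mds.0 perf dump` output (values invented).
perf_dump = json.loads("""
{
  "mds_cache": {
    "num_strays": 100000,
    "num_strays_purging": 12,
    "num_strays_delayed": 3
  }
}
""")

cache = perf_dump["mds_cache"]
print(cache["num_strays"])  # stray dentries not yet purged
```

A steadily growing num_strays relative to num_strays_purging is the symptom being discussed here: strays accumulating faster than the MDS purges them.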
-----Original Message-----
From: Cary [mailto:dynamic.c...@gmail.com]
Sent: December 26, 2017
Are you using hardlinks in cephfs?
On Tue, Dec 26, 2017 at 3:42 AM, 周 威 wrote:
> The output of ceph osd df:
>
> ID WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE  VAR  PGS
>  0 1.62650 1.00000  1665G 1279G 386G  76.82 1.05 343
>  1 1.62650 1.00000  1665G 1148G 516G  68.97
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013646.html
On Tue, Dec 26, 2017 at 6:07 AM, Cary wrote:
> Are you using hardlinks in cephfs?
>
> On Tue, Dec 26, 2017 at 3:42 AM, 周 威 wrote:
>> The output of ceph osd df:
>>
>> ID
The output of ceph osd df:

ID WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE  VAR  PGS
 0 1.62650 1.00000  1665G 1279G 386G  76.82 1.05 343
 1 1.62650 1.00000  1665G 1148G 516G  68.97 0.94 336
 2 1.62650 1.00000  1665G 1253G 411G  75.27 1.03 325
 3 1.62650 1.00000  1665G 1192G 472G  71.60
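The numbers in this table are internally consistent, assuming the usual reading of the columns: %USE is USE divided by SIZE, and VAR is each OSD's %USE divided by the cluster-wide %RAW USED (73.19 per the `ceph df` output later in the thread). A quick check:

```python
# Rows from the `ceph osd df` output above: (size_g, use_g, pct_use, var)
rows = [
    (1665, 1279, 76.82, 1.05),
    (1665, 1148, 68.97, 0.94),
    (1665, 1253, 75.27, 1.03),
    (1665, 1192, 71.60, None),  # VAR truncated in the original mail
]
cluster_pct_raw_used = 73.19  # GLOBAL %RAW USED from `ceph df`

for size, use, pct, var in rows:
    assert abs(use / size * 100 - pct) < 0.1          # %USE = USE / SIZE
    if var is not None:
        assert abs(pct / cluster_pct_raw_used - var) < 0.01  # VAR = %USE / %RAW USED
print("ok")
```

The tolerances absorb the rounding already present in the G-denominated columns.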
Could you post the output of “ceph osd df”?
On Dec 25, 2017, at 19:46, 周 威 wrote:
Hi all:
Ceph version:
ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)
Ceph df:
GLOBAL:
SIZE   AVAIL  RAW USED %RAW USED
46635G 12500G 34135G   73.19
Hi,
We recently started to test bluestore with a huge number of small files
(only dozens of bytes per file). We have 22 OSDs in a test cluster
running ceph-12.2.1 with 2 replicas, and each OSD disk is 2TB in size. After
writing about 150 million files through cephfs, we found each OSD
disk usage
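A plausible explanation for unexpectedly high disk usage with tiny files: bluestore allocates space in units of `bluestore_min_alloc_size`, which defaults to 64 KiB for HDDs in the 12.2.x series (worth verifying against your own config), so a dozens-of-bytes file still consumes a full allocation unit per replica. A back-of-envelope estimate under that assumption:

```python
files = 150_000_000
replicas = 2
min_alloc = 64 * 1024          # assumed bluestore_min_alloc_size_hdd default (bytes)

raw_bytes = files * replicas * min_alloc
raw_tib = raw_bytes / 2**40
print(f"{raw_tib:.1f} TiB")    # raw space consumed by allocation granularity alone

per_osd_tib = raw_tib / 22     # spread across the 22 OSDs in the test cluster
print(f"{per_osd_tib:.2f} TiB per OSD")
```

On 2TB disks, allocation granularity alone would already account for a large fraction of each OSD, before any per-object metadata overhead in RocksDB is counted.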
Hi all:
Ceph version:
ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)
Ceph df:
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
46635G 12500G 34135G 73.19
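As a sanity check on the GLOBAL line above: AVAIL plus RAW USED adds up to SIZE exactly, and RAW USED over SIZE reproduces the reported %RAW USED within rounding:

```python
# GLOBAL values from the `ceph df` output above, in the "G" units ceph prints
size_g, avail_g, raw_used_g = 46635, 12500, 34135

assert size_g == avail_g + raw_used_g   # SIZE = AVAIL + RAW USED
pct_raw_used = raw_used_g / size_g * 100
print(f"{pct_raw_used:.2f}%")           # ~73.2%, matching the reported 73.19
```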
rm d
rm: cannot remove `d': No space left on device
and mds_cache:
{
  "mds_cache": {
Hello folks,
I am trying to share my ceph rbd images through the iSCSI protocol.
I am trying the iscsi-gateway:
http://docs.ceph.com/docs/master/rbd/iscsi-overview/
Now
systemctl start rbd-target-api
is working, and I can run gwcli
(on a CentOS 7.4 osd node):
gwcli
/> ls
o- /