Re: [ceph-users] rm: cannot remove dir and files (cephfs)

2018-02-21 Thread Deepak Naidu
>> rm: cannot remove '/design/4695/8/6-50kb.jpg': No space left on device
A “No space left on device” error in CephFS is typically caused by having more 
than ~1 million (10^6) files in a single directory. Deletes can hit it too, 
since unlinked files are moved into the MDS stray directories, which count 
against the same limit. To mitigate this, try increasing 
"mds_bal_fragment_size_max" to a higher value, for example 7 million (7000000):

[mds]
mds_bal_fragment_size_max = 7000000
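
If a restart is inconvenient, the value can usually also be injected at 
runtime. A minimal sketch for a Jewel-era cluster, assuming mds.web26 is your 
active MDS (taken from the ceph -s output below) -- double-check the daemon 
name against your own cluster:

ceph tell mds.web26 injectargs '--mds_bal_fragment_size_max 7000000'
# verify via the admin socket on the MDS host
ceph daemon mds.web26 config get mds_bal_fragment_size_max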

I am not going into detail here, but there are many other tuning parameters as 
well, including enabling directory fragmentation, multiple active MDS daemons, 
etc. It seems your Ceph version is 10.2.10 (Jewel); Luminous (12.2.x) has 
better support for multiple MDS and dirfrag. On Jewel some of these options 
are still experimental, so you may need to do a bit of reading on CephFS 
first. Just a note of advice based on my experience: CephFS is ideal for large 
files (MBs/GBs) and is not great for small files (KBs). Also, split your files 
across multiple sub-directories to avoid running into this kind of issue 
again; see the sketch below.
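
To illustrate the sub-directory advice, here is a minimal shell sketch of one 
possible layout. The /design path is from your mail, but the two-level hash 
scheme and the /design/incoming staging directory are just assumptions for 
illustration:

# Spread files across up to 256*256 leaf directories keyed on a content
# hash, so no single directory grows toward the fragment limit.
for f in /design/incoming/*; do
  h=$(md5sum "$f" | cut -c1-4)           # first 4 hex chars of the hash
  d="/design/by-hash/${h:0:2}/${h:2:2}"  # e.g. /design/by-hash/a3/f1
  mkdir -p "$d" && mv "$f" "$d/"
done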

--
Deepak



From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Андрей
Sent: Friday, February 09, 2018 2:35 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] rm: cannot remove dir and files (cephfs)

ceph version 10.2.10

ceph -s
cluster 97f833aa-cc6f-41d5-bf82-bda5c09fd664
health HEALTH_OK
monmap e3: 3 mons at 
{web23=192.168.65.24:6789/0,web25=192.168.65.55:6789/0,web26=192.168.65.56:6789/0}
election epoch 1198, quorum 0,1,2 web23,web25,web26
fsmap e464: 1/1/1 up {0=web26=up:active}, 3 up:standby
osdmap e325: 4 osds: 4 up, 4 in
flags sortbitwise,require_jewel_osds
pgmap v42475: 128 pgs, 3 pools, 274 GB data, 1710 kobjects
854 GB used, 2939 GB / 3793 GB avail
128 active+clean
client io 181 kB/s wr, 0 op/s rd, 5 op/s wr

ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
3793G 2941G 851G 22.46
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 822G 0
cephfs_metadata 1 17968k 0 822G 101458
cephfs_data 2 273G 24.99 822G 1643059

kernel mount cephfs
grep design /etc/fstab
192.168.65.24:,192.168.65.55:,192.168.65.56:/ /design ceph 
rw,relatime,name=design,secret=...,_netdev,noatime 0 0

I cannot delete some files and directories:
rm -rf /design/4*
rm: cannot remove '/design/4695/8/6-50kb.jpg': No space left on device
rm: cannot remove '/design/4695/8/9-300kb.png': No space left on device
rm: cannot remove '/design/4695/8/0-300kb.png': No space left on device

Any ideas?
Please help.






[ceph-users] rm: cannot remove dir and files (cephfs)

2018-02-09 Thread Андрей
ceph version 10.2.10

ceph -s
cluster 97f833aa-cc6f-41d5-bf82-bda5c09fd664
health HEALTH_OK
monmap e3: 3 mons at 
{web23=192.168.65.24:6789/0,web25=192.168.65.55:6789/0,web26=192.168.65.56:6789/0}
election epoch 1198, quorum 0,1,2 web23,web25,web26
fsmap e464: 1/1/1 up {0=web26=up:active}, 3 up:standby
osdmap e325: 4 osds: 4 up, 4 in
flags sortbitwise,require_jewel_osds
pgmap v42475: 128 pgs, 3 pools, 274 GB data, 1710 kobjects
854 GB used, 2939 GB / 3793 GB avail
128 active+clean
client io 181 kB/s wr, 0 op/s rd, 5 op/s wr

ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED 
3793G 2941G 851G 22.46 
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS 
rbd 0 0 0 822G 0 
cephfs_metadata 1 17968k 0 822G 101458 
cephfs_data 2 273G 24.99 822G 1643059

kernel mount cephfs
grep design /etc/fstab
192.168.65.24:,192.168.65.55:,192.168.65.56:/ /design ceph 
rw,relatime,name=design,secret=...,_netdev,noatime 0 0

I cannot delete some files and directories:
rm -rf /design/4*
rm: cannot remove '/design/4695/8/6-50kb.jpg': No space left on device
rm: cannot remove '/design/4695/8/9-300kb.png': No space left on device
rm: cannot remove '/design/4695/8/0-300kb.png': No space left on device

Any ideas?
Please help.

