Maybe someone can shed new light on this:
1. Only the SSD cache OSDs are affected by this issue
2. Total cache capacity is 12 x 60 GiB OSDs; the backend filesystem is ext4
3. I have created 2 cache tier pools with replica size=3 on those OSDs,
both with pg_num:400, pgp_num:400 (see the sketch after this list)
4. There was a CRUSH ruleset:
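As referenced in item 3, a rough sketch of how such a cache-tier setup is typically created; the pool names, the rule name, and the backing EC pool below are hypothetical illustrations, not the actual ruleset:

ceph osd pool create cache-pool-1 400 400 replicated ssd-cache-rule   # hypothetical names
ceph osd pool set cache-pool-1 size 3
ceph osd tier add ec-pool cache-pool-1
ceph osd tier cache-mode cache-pool-1 writeback
ceph osd tier set-overlay ec-pool cache-pool-1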
On Tue, Mar 24, 2015 at 12:13 AM, Christian Balzer ch...@gol.com wrote:
On Tue, 24 Mar 2015 09:41:04 +0300 Kamil Kuramshin wrote:
Yes, I read it and I do not understand what you mean when you say *verify
this*. All 3335808 inodes are definitely files and directories created by
the ceph OSD process:
tune2fs 1.42.5 (29-Jul-2012)
Filesystem volume name: none
Last mounted on: /var/lib/ceph/tmp/mnt.05NAJ3
Filesystem UUID:
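One straightforward way to check that, assuming the FileStore layout of that era, is to count entries on the filesystem and under the OSD's object directory (the mount point is the one from the df output):

find /var/lib/ceph/osd/ceph-45 -xdev | wc -l             # all files and directories on that filesystem
find /var/lib/ceph/osd/ceph-45/current -type f | wc -l   # object files under the OSD data dir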
What I mean is how/why did Ceph create 3+ million files, where in the tree
Yes, I understand that.
The initial purpose of the first email was just advice for newcomers. My
fault was that I selected ext4 as the backend for the SSD disks.
But I did not foresee that the inode count could reach its limit before the
free space runs out :)
And maybe there should be some sort of warning
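A minimal local check along those lines might look like this; the 80% threshold and the mount point are just examples:

usage=$(df -i --output=ipcent /var/lib/ceph/osd/ceph-45 | tail -n 1 | tr -dc '0-9')
[ "$usage" -ge 80 ] && echo "WARNING: inode usage at ${usage}% on ceph-45"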
You could fix this by changing your block size when formatting the
mount point with the mkfs -b option. I had this same issue when dealing
with the filesystem using glusterfs, and the solution is either to use a
filesystem that allocates inodes automatically or to change the block size
when you
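Besides the block size (-b), the inode density can also be raised directly at mkfs time; a sketch, with /dev/sdb1 standing in for the cache OSD partition:

mkfs.ext4 -i 4096 /dev/sdb1      # one inode per 4 KiB of space instead of the 16 KiB default
mkfs.ext4 -N 8000000 /dev/sdb1   # or set an explicit inode count

Filesystems like XFS and btrfs allocate inodes dynamically, so they avoid this limit entirely.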
On Mon, 23 Mar 2015 15:26:07 +0300 Kamil Kuramshin wrote:
In my case there was a cache pool for an EC pool serving RBD images, the
object size is 4 MB, and the client was a kernel-rbd client.
Each SSD disk is a 60 GB disk, 2 disks per node, 6 nodes in total = 12 OSDs
in total
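For scale, a back-of-the-envelope comparison for one 60 GiB cache OSD, assuming the ext4 default of one inode per 16 KiB and the nominal 4 MiB object size:

echo $(( 60 * 1024 * 1024 / 16 ))   # ~3.9 million inodes available at the default ratio
echo $(( 60 * 1024 / 4 ))           # ~15 thousand objects if every file were a full 4 MiB

So exhausting 3.3 million inodes means the OSD holds millions of files far smaller than 4 MiB.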
23.03.2015 12:00, Christian Balzer wrote:
Hello,
This is rather confusing, as
Recently got a problem with OSDs based on SSD disks used in cache tier
for EC-pool
superuser@node02:~$ df -i
Filesystem        Inodes   IUsed  IFree IUse% Mounted on
...
/dev/sdb1        3335808 3335808      0  100% /var/lib/ceph/osd/ceph-45
/dev/sda1
Hello,
This is rather confusing, as cache-tiers are just normal OSDs/pools and
thus should have Ceph objects of around 4MB in size by default.
This matches what I see with ext4 here (normal OSD, not a cache
tier):
---
size:
/dev/sde1 2.7T 204G 2.4T 8% /var/lib/ceph/osd/ceph-0
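A quick sanity check on any OSD mount is to compare bytes used with inodes used; the path is the one from the snippet above, and the df options are standard GNU coreutils:

df -B1 --output=used /var/lib/ceph/osd/ceph-0 | tail -n 1   # bytes used
df --output=iused /var/lib/ceph/osd/ceph-0 | tail -n 1      # inodes used

The average file size is the first number divided by the second; for ~4 MiB objects it should land in the megabyte range, while the exhausted cache OSDs imply only a few KiB per file.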