From: ceph-users [mnel...@redhat.com]
Sent: Thursday, December 08, 2016 10:25 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] filestore_split_multiple hardcoded maximum?
I don't want to retype it all, but you guys might be interested in the
discussion under section 3 of this post here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/012987.html
Basically the gist of it is:
1) Make sure SELinux isn't doing security xattr lookups for every object, since those lookups add significant per-file overhead.
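For reference, here is roughly what the filestore split settings from the subject line look like in ceph.conf. The values below are only illustrative, not recommendations; tune them to your own object counts:

    [osd]
    # A filestore subdirectory splits once it holds more than
    #   16 * filestore_split_multiple * abs(filestore_merge_threshold)
    # files (with the defaults of 2 and 10, that is 320 files).
    filestore_split_multiple = 8
    filestore_merge_threshold = 40
    # A negative merge threshold disables merging while its absolute
    # value still sets the split point:
    # filestore_merge_threshold = -40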
David, you might also be interested in the offline split capability Josh added to 'ceph-objectstore-tool' in Jewel 10.2.4.
It allows splitting filestore directories offline
(http://tracker.ceph.com/issues/17220), though apparently not merging them.
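If I'm reading the tracker right, usage is something like the sketch below, run with the OSD stopped; double-check the op name and flags against ceph-objectstore-tool --help on your version:

    # stop the OSD first, then apply the split settings from ceph.conf
    # to that OSD's filestore directories offline
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
        --journal-path /var/lib/ceph/osd/ceph-<id>/journal \
        --op apply-layout-settings --pool <poolname>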
Regards,
Frédéric.
- On 27 Sep 16, at 0:42, David
Hi David,
I'm surprised your message hasn't gotten any replies yet. I guess it depends on how
many files your OSDs end up storing on the filesystem, which depends essentially on
the use case.
We're having similar issues with a 144-OSD cluster running 2 pools, each holding
100 M objects. One is a replication pool
We are running Hammer 0.94.7 and have had very bad experiences with PG
folders splitting into further sub-directories: OSDs being marked out, hundreds of
blocked requests, etc. We have modified our settings and watched the behavior
match the Ceph documentation for splitting, but right now the
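For anyone following along, the split point from the Ceph documentation is easy to check by hand (a rough sketch using the default settings; your thresholds will differ):

    # split point = 16 * filestore_split_multiple * abs(filestore_merge_threshold)
    $ echo $((16 * 2 * 10))
    320
    # With the defaults (2 and 10), every PG subdirectory splits once it
    # exceeds ~320 files. Since PGs fill at roughly the same rate, many
    # directories tend to hit that point around the same time, which is
    # when the blocked requests show up.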