>
> On 06/06/2018 12:22 PM, Andras Pataki wrote:
> > Hi Greg,
> >
> > The docs say that client_cache_size is the number of inodes that are
> > cached, not bytes of data. Is that incorrect?
>
Oh whoops, you're correct of course. Sorry about that!
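(For the archives: the two knobs are easy to mix up. As far as I know the
relevant defaults in ceph.conf look like the below; shown only to illustrate
the distinction, not as a tuning recommendation.)

[client]
    # metadata cache: a count of inodes, not bytes
    client_cache_size = 16384
    # ceph-fuse object data cache: a byte limit
    client_oc_size = 209715200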
On Wed, Jun 6, 2018 at 12:33 PM Andras Pataki wrote:
Staring at the logs a bit more it seems like the following lines might
be the clue:
2018-06-06 08:14:17.615359 7fffefa45700 10 objectcacher trim start:
bytes: max 2147483640 clean 2145935360, objects: max 8192 current 8192
2018-06-06 08:14:17.615361 7fffefa45700 10 objectcacher trim finish:
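(If I'm reading the trim start line right, the object cacher is pinned at
both its byte limit, ~2 GiB, and its object limit, 8192. Assuming those
correspond to the usual ceph-fuse settings, the matching ceph.conf entries
would look like the below.)

[client]
    # byte limit of the object cacher ("bytes: max 2147483640" above)
    client_oc_size = 2147483640
    # object-count limit of the object cacher ("objects: max 8192" above)
    client_oc_max_objects = 8192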
Hi Greg,
The docs say that client_cache_size is the number of inodes that are
cached, not bytes of data. Is that incorrect?
Andras
On 06/06/2018 11:25 AM, Gregory Farnum wrote:
On Wed, Jun 6, 2018 at 5:52 AM, Andras Pataki wrote:
We're using CephFS with Luminous 12.2.5 and the fuse client (on CentOS
7.4, kernel 3.10.0-693.5.2.el7.x86_64). Performance has been very good
generally, but we're currently running into some strange performance
issues with one of our applications. The client in this case is on a
higher latency ...
Hello Ashley,
On Wed, Oct 18, 2017 at 12:45 AM, Ashley Merrick wrote:
> 1/ Are there any options or optimizations that anyone has used or can
> suggest to increase ceph-fuse performance?
You may try playing with the sizes of reads and writes. Another
alternative is to use libcephfs directly to avoid the fuse overhead.
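As a rough illustration of the libcephfs route, a minimal read loop looks
something like the below (the conf path, file name and 1 MiB read size are
just placeholders, and error handling is mostly omitted). Build with:
cc -o cephfs_read cephfs_read.c -lcephfs

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <cephfs/libcephfs.h>

int main(void)
{
    struct ceph_mount_info *cmount;
    const size_t bufsz = 1 << 20;          /* 1 MiB reads; vary to test */
    char *buf = malloc(bufsz);
    int64_t off = 0, n;
    int fd;

    if (!buf)
        return 1;
    if (ceph_create(&cmount, NULL) < 0)    /* default client id */
        return 1;
    ceph_conf_read_file(cmount, "/etc/ceph/ceph.conf");
    if (ceph_mount(cmount, "/") < 0)       /* mount at the fs root */
        return 1;

    fd = ceph_open(cmount, "/some/large/file", O_RDONLY, 0);
    if (fd < 0)
        return 1;
    while ((n = ceph_read(cmount, fd, buf, bufsz, off)) > 0)
        off += n;                          /* data lands in buf */
    printf("read %lld bytes\n", (long long)off);

    ceph_close(cmount, fd);
    ceph_unmount(cmount);
    ceph_release(cmount);
    free(buf);
    return 0;
}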
Hello,
I have been trying cephfs on the latest 12.x release.
Performance under cephfs mounted via the kernel client (kernel version
4.13.4) seems to be as expected, maxing out the underlying storage /
resources.
However, when it comes to mounting cephfs via ceph-fuse, I am looking at
performance of 5-10% for ...
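(For context, a typical side-by-side test looks something like the below;
the monitor address, secret file and mount points are placeholders, not
taken from this thread.)

# kernel client
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs-kernel \
    -o name=admin,secretfile=/etc/ceph/admin.secret
# fuse client
ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs-fuse
# run the same sequential write against each mount, varying bs
dd if=/dev/zero of=/mnt/cephfs-fuse/ddtest bs=4M count=1024 conv=fsync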