[ceph-users] cephfs kernel client - page cache being invalidated.

2018-10-14 Thread jesper
Hi We have a dataset of ~300 GB on CephFS which is being used for computations over and over again .. being refreshed daily or similar. When hosting it on NFS after refresh, they are transferred, but from there - they would be sitting in the kernel page cache of the client until they are

Re: [ceph-users] cephfs kernel client - page cache being invalidated.

2018-10-14 Thread John Hearns
Hej Jesper. Sorry I do not have a direct answer to your question. When looking at memory usage, I often use this command: watch cat /proc/meminfo On Sun, 14 Oct 2018 at 13:22, wrote: > Hi > > We have a dataset of ~300 GB on CephFS which is being used for computations > over and over again
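
John's `watch cat /proc/meminfo` suggestion can be narrowed to just the page-cache counters. A minimal sketch (plain POSIX shell, Linux only):

```shell
#!/bin/sh
# Snapshot the page-cache related counters from /proc/meminfo.
# "Cached" covers the page cache (minus swap cache), "Buffers" the
# block-device buffers; both shrink when cached data is dropped.
grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo

# To watch the counters live while re-reading the dataset:
#   watch -n1 "grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo"
```

Running this before and after a pass over the dataset shows whether the cached pages survive the refresh.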

Re: [ceph-users] cephfs kernel client - page cache being invalidated.

2018-10-14 Thread jesper
> Actual amount of memory used by VFS cache is available through 'grep > Cached /proc/meminfo'. slabtop provides information about cache > of inodes, dentries, and IO memory buffers (buffer_head). Thanks, that was also what I got out of it. And why I reported "free" output in the first as it also

Re: [ceph-users] cephfs kernel client - page cache being invalidated.

2018-10-14 Thread Sergey Malinin
Try looking in /proc/slabinfo / slabtop during your tests. > On 14.10.2018, at 15:21, jes...@krogh.cc wrote: > > Hi > > We have a dataset of ~300 GB on CephFS which is being used for computations > over and over again .. being refreshed daily or similar. > > When hosting it on NFS after

Re: [ceph-users] cephfs kernel client - page cache being invalidated.

2018-10-14 Thread jesper
> Try looking in /proc/slabinfo / slabtop during your tests. I need a bit of guidance here.. Does the slabinfo cover the VFS page cache? .. I cannot seem to find any traces (sorting by size on machines with a huge cache does not really give anything). Perhaps I'm holding the screwdriver wrong?

Re: [ceph-users] cephfs kernel client - page cache being invalidated.

2018-10-14 Thread Sergey Malinin
Actual amount of memory used by VFS cache is available through 'grep Cached /proc/meminfo'. slabtop provides information about cache of inodes, dentries, and IO memory buffers (buffer_head). > On 14.10.2018, at 17:28, jes...@krogh.cc wrote: > >> Try looking in /proc/slabinfo / slabtop during
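
Sergey's distinction can be verified directly: file data shows up under `Cached` in /proc/meminfo, while inode/dentry/buffer_head caches live in the slab counters. A small sketch (Linux only; `slabtop` / `/proc/slabinfo` need root, so that part is left as a comment):

```shell
#!/bin/sh
# Page cache (file data) - the "Cached" line in /proc/meminfo:
awk '/^Cached:/ {print "page cache:", $2, "kB"}' /proc/meminfo

# Slab memory (inodes, dentries, buffer_heads - NOT file data),
# summarised by the SReclaimable/SUnreclaim counters:
awk '/^(SReclaimable|SUnreclaim):/ {print $1, $2, "kB"}' /proc/meminfo

# Per-cache detail (dentry, *_inode_cache, buffer_head) needs root:
#   sudo slabtop -o -s c | head -n 15
```

This is why sorting slabtop by size finds nothing for a huge page cache: the cached file data simply is not slab memory.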

Re: [ceph-users] cephfs kernel client - page cache being invalidated.

2018-10-14 Thread Jesper Krogh
On 14 Oct 2018, at 15.26, John Hearns wrote: > > This is a general question for the ceph list. > Should Jesper be looking at these vm tunables? > vm.dirty_ratio > vm.dirty_centisecs > > What effect do they have when using Cephfs? This situation is read-only, thus no dirty data in the page cache.

Re: [ceph-users] cephfs kernel client - page cache being invalidated.

2018-10-14 Thread John Hearns
This is a general question for the ceph list. Should Jesper be looking at these vm tunables? vm.dirty_ratio vm.dirty_centisecs What effect do they have when using Cephfs? On Sun, 14 Oct 2018 at 14:24, John Hearns wrote: > Hej Jesper. > Sorry I do not have a direct answer to your question. >

Re: [ceph-users] ceph dashboard ac-* commands not working (Mimic)

2018-10-14 Thread John Spray
The docs you're looking at are from the master (development) version of ceph, so you're seeing commands that don't exist in mimic. You can swap master for mimic in that URL. Hopefully we'll soon have some changes to make this more apparent when looking at the docs. John On Fri, 12 Oct 2018,
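
The URL swap John describes is mechanical; a sketch (the dashboard URL below is just an assumed example of a master-branch docs link):

```shell
#!/bin/sh
# Rewrite a master-branch docs URL to the mimic release branch.
url="http://docs.ceph.com/docs/master/mgr/dashboard/"
echo "$url" | sed 's#/docs/master/#/docs/mimic/#'
# prints: http://docs.ceph.com/docs/mimic/mgr/dashboard/
```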

Re: [ceph-users] cephfs kernel client - page cache being invalidated.

2018-10-14 Thread jesper
> On Sun, Oct 14, 2018 at 8:21 PM wrote: > how many cephfs mounts access the file? Is it possible that some > program opens that file in RW mode (even if they just read the file)? The nature of the program is that it is "prepped" by one set of commands and queried by another, thus the RW case
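
Yan's question - whether anything holds the files open read-write - can be checked per client via /proc, since the open flags drive which capabilities the MDS grants and hence whether client caching survives. A sketch using a throwaway file (Linux only; the low octal digit of the `flags` field is 0 for O_RDONLY and 2 for O_RDWR):

```shell
#!/bin/sh
# Distinguish read-only from read-write opens via /proc/<pid>/fdinfo.
f=$(mktemp)
exec 3<"$f"     # read-only open
exec 4<>"$f"    # read-write open
for fd in 3 4; do
    flags=$(awk '/^flags:/ {print $2}' /proc/$$/fdinfo/$fd)
    printf 'fd %s: flags %s\n' "$fd" "$flags"
done
exec 3<&- 4<&-
rm -f "$f"
```

On a real client, substitute the PID of the suspect process and the fd pointing at the dataset (enumerated under /proc/&lt;pid&gt;/fd).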

[ceph-users] Ceph osd logs

2018-10-14 Thread Zhenshi Zhou
Hi, I added some OSDs into the cluster (luminous) lately. The OSDs use bluestore and everything goes fine. But there is no osd log in the log file. The log directory has only empty files. I checked my settings, "ceph daemon osd.x config show", and I get "debug_osd": "1/5". How can I get the new osds'
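
A hedged sketch of how this is usually chased down: "1/5" means log level 1 (terse) to the file with level 5 kept in memory, so raising the file level should produce output if logging is otherwise healthy. The cluster commands assume admin access and a running osd.12 (a made-up example id), so they are left as comments; only the default-path derivation is executable:

```shell
#!/bin/sh
# Locate (and, via the commented commands, unmute) an OSD's log.
OSD_ID=12   # example id - substitute one of the new OSDs

# Raise the file log level on one daemon (run on its host):
#   ceph daemon osd.$OSD_ID config set debug_osd 5/5
# Or cluster-wide, from any admin node:
#   ceph tell osd.* injectargs '--debug-osd 5/5'
# Also check where the daemon thinks it is logging:
#   ceph daemon osd.$OSD_ID config get log_file

# Default log path for the standard cluster name "ceph":
LOGFILE="/var/log/ceph/ceph-osd.${OSD_ID}.log"
echo "$LOGFILE"
```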

Re: [ceph-users] cephfs kernel client - page cache being invalidated.

2018-10-14 Thread Yan, Zheng
On Sun, Oct 14, 2018 at 8:21 PM wrote: > > Hi > > We have a dataset of ~300 GB on CephFS which is being used for computations > over and over again .. being refreshed daily or similar. > > When hosting it on NFS after refresh, they are transferred, but from > there - they would be sitting in the