Hi
We have a dataset of ~300 GB on CephFS which is being used for computations
over and over again, being refreshed daily or similar.
When hosting it on NFS after refresh, they are transferred, but from
there - they would be sitting in the kernel page cache of the client
until they are
Hej Jesper.
Sorry I do not have a direct answer to your question.
When looking at memory usage, I often use this command:
watch cat /proc/meminfo
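A slightly more focused variant of the same idea, pulling out just the page-cache related counters (Linux-only; wrap the grep in `watch -n 2 '...'` to refresh continuously):

```shell
# One-shot snapshot of the page-cache related counters from /proc/meminfo.
# MemFree = unused RAM, Buffers/Cached = cached file data, Dirty = not yet
# written back (should be ~0 for a read-only workload).
grep -E '^(MemFree|Buffers|Cached|Dirty):' /proc/meminfo
```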
On Sun, 14 Oct 2018 at 13:22, wrote:
> Hi
>
> We have a dataset of ~300 GB on CephFS which is being used for computations
> over and over again
> Actual amount of memory used by VFS cache is available through 'grep
> Cached /proc/meminfo'. slabtop provides information about cache
> of inodes, dentries, and IO memory buffers (buffer_head).
Thanks, that was also what I got out of it. And why I reported "free"
output in the first place, as it also
Try looking in /proc/slabinfo / slabtop during your tests.
> On 14.10.2018, at 15:21, jes...@krogh.cc wrote:
>
> Hi
>
> We have a dataset of ~300 GB on CephFS which is being used for computations
> over and over again .. being refreshed daily or similar.
>
> When hosting it on NFS after
> Try looking in /proc/slabinfo / slabtop during your tests.
I need a bit of guidance here. Does slabinfo cover the VFS page
cache? I cannot seem to find any traces (sorting by size on
machines with a huge cache does not really give anything). Perhaps
I'm holding the screwdriver wrong?
Actual amount of memory used by VFS cache is available through 'grep Cached
/proc/meminfo'. slabtop provides information about cache of inodes, dentries,
and IO memory buffers (buffer_head).
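To put the two side by side, something like this works on any Linux client (`/proc/slabinfo` is normally root-readable only, so the slabtop part is guarded):

```shell
# File-data page cache vs. reclaimable slab memory (inodes, dentries, ...)
grep -E '^(Cached|SReclaimable):' /proc/meminfo

# Largest slab caches by size; look for dentry and *_inode_cache entries.
# Skipped when /proc/slabinfo is not readable (usually requires root).
if [ -r /proc/slabinfo ]; then
    slabtop -o -s c | head -n 15
fi
```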
> On 14.10.2018, at 17:28, jes...@krogh.cc wrote:
>
>> Try looking in /proc/slabinfo / slabtop during
On 14 Oct 2018, at 15.26, John Hearns wrote:
>
> This is a general question for the ceph list.
> Should Jesper be looking at these vm tunables?
> vm.dirty_ratio
> vm.dirty_expire_centisecs
>
> What effect do they have when using CephFS?
This situation is read-only, thus there is no dirty data in the page cache.
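Which is easy to verify on the client while the workload runs; for a purely read-only access pattern both counters should stay near zero:

```shell
# Dirty = pages modified but not yet written back; Writeback = pages
# currently being flushed. A read-only workload leaves both ~0.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```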
This is a general question for the ceph list.
Should Jesper be looking at these vm tunables?
vm.dirty_ratio
vm.dirty_expire_centisecs
What effect do they have when using CephFS?
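For reference, the current values of the writeback tunables can be read straight from procfs without root:

```shell
# Print the current dirty-page writeback tunables. The *_ratio values are
# percentages of RAM; the *_centisecs values are times in 1/100 s.
for t in dirty_ratio dirty_background_ratio \
         dirty_expire_centisecs dirty_writeback_centisecs; do
    printf '%-28s %s\n' "$t" "$(cat /proc/sys/vm/$t)"
done
```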
On Sun, 14 Oct 2018 at 14:24, John Hearns wrote:
> Hej Jesper.
> Sorry I do not have a direct answer to your question.
>
The docs you're looking at are from the master (development) version of
ceph, so you're seeing commands that don't exist in mimic. You can swap
master for mimic in that URL.
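For example (hypothetical docs page, substitute the one you were actually reading):

```shell
# Rewrite a master-branch docs URL to the mimic branch. The URL here is
# just an illustration; any docs.ceph.com/docs/master/... path works.
url="https://docs.ceph.com/docs/master/cephfs/"
echo "$url" | sed 's|/master/|/mimic/|'
```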
Hopefully we'll soon have some changes to make this more apparent when
looking at the docs.
John
On Fri, 12 Oct 2018,
> On Sun, Oct 14, 2018 at 8:21 PM wrote:
> How many CephFS mounts access the file? Is it possible that some
> program opens the file in RW mode (even if it just reads the file)?
The nature of the program is that it is "prepped" by one set of commands
and queried by another, thus the RW case
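One way to check how a given process actually holds a file open is the `flags:` field in `/proc/<pid>/fdinfo`; a rough Linux-only sketch, using the current shell as a stand-in process:

```shell
# Open a scratch file read-only on fd 3, then read its open flags back
# from /proc. The flags value is octal; O_WRONLY=1, O_RDWR=2, so
# (flags & 3) == 0 means the descriptor is read-only.
f=$(mktemp)
exec 3< "$f"
flags=$(awk '/^flags:/ {print $2}' "/proc/$$/fdinfo/3")
echo "open flags: $flags (access mode: $((flags & 3)))"
exec 3<&-
rm -f "$f"
```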
Hi,
I added some OSDs into the cluster (luminous) lately. The OSDs use
bluestore and everything goes fine. But there is no OSD log output;
the log directory has only empty files.
I checked my settings, "ceph daemon osd.x config show", and I get
"debug_osd": "1/5".
How can I get the new osds'
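For reference, OSD logging is normally driven by ceph.conf settings along these lines (a sketch of typical defaults, not the poster's actual configuration):

```ini
[osd]
; first number = verbosity written to the log file,
; second number = verbosity kept in the in-memory ring buffer
debug osd = 1/5
log file = /var/log/ceph/$cluster-$name.log
```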
On Sun, Oct 14, 2018 at 8:21 PM wrote:
>
> Hi
>
> We have a dataset of ~300 GB on CephFS which is being used for computations
> over and over again .. being refreshed daily or similar.
>
> When hosting it on NFS after refresh, they are transferred, but from
> there - they would be sitting in the