> It never has been. In cache_inode, a pin-ref kept an entry from being
> reaped; now any ref beyond 1 keeps it.

Guess we need to do something about that... We need to put limits on state 
somewhere; that would take care of it mostly (rough sketch of what I mean 
below). We could still have some files in excess of the high water mark due 
to active I/O threads, but that quantity is bounded by the worker thread count.
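
Something like a global cap checked at state creation time. A hypothetical
sketch follows; state_slot_acquire/state_slot_release and the counter are
made-up names for illustration, not existing Ganesha symbols:

    /* Hypothetical sketch only: cap the number of NFSv4 state objects
     * so state refs can no longer pin an unbounded number of cache
     * entries.  All names here are invented for illustration. */
    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_long state_count;
    static long state_limit = 100000;   /* would come from config */

    /* Call before allocating a lock/open/delegation state; on false
     * the caller would return NFS4ERR_RESOURCE (or reap an idle
     * state first). */
    static bool state_slot_acquire(void)
    {
            long cur = atomic_load(&state_count);

            do {
                    if (cur >= state_limit)
                            return false;
            } while (!atomic_compare_exchange_weak(&state_count,
                                                   &cur, cur + 1));
            return true;
    }

    /* Call when the state is freed. */
    static void state_slot_release(void)
    {
            atomic_fetch_sub(&state_count, 1);
    }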

Frank

> On Fri, Aug 4, 2017 at 1:31 PM, Frank Filz <ffilz...@mindspring.com> wrote:
> >> I'm hitting a case where mdcache keeps growing well beyond the high
> >> water mark. Here is a snapshot of the lru_state:
> >>
> >> 1 = {entries_hiwat = 100000, entries_used = 2306063,
> >>      chunks_hiwat = 100000, chunks_used = 16462,
> >>
> >> It has grown to 2.3 million entries and each entry is ~1.6K.
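
(For scale: 2,306,063 entries x ~1.6 KiB is roughly 3.5 GiB of entry
memory, about 23x the configured 100,000-entry high water mark.)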
> >>
> >> I looked at the first entry in lane 0, L1 queue:
> >>
> >> (gdb) p LRU[0].L1
> >> $9 = {q = {next = 0x7fad64256f00, prev = 0x7faf21a1bc00},
> >>       id = LRU_ENTRY_L1, size = 254628}
> >> (gdb) p (mdcache_entry_t *)(0x7fad64256f00-1024)
> >> $10 = (mdcache_entry_t *) 0x7fad64256b00
> >> (gdb) p $10->lru
> >> $11 = {q = {next = 0x7fad65ea0f00, prev = 0x7d67c0 <LRU>},
> >>        qid = LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 0, cf = 0}
> >> (gdb) p $10->fh_hk.inavl
> >> $13 = true
> >
> > The refcount of 2 prevents reaping.
> >
> > There could be a refcount leak.
> >
> > Hmm, though, I thought entries_hiwat was a hard limit; guess not...
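
(As I recall the allocation path in mdcache_lru.c -- paraphrasing from
memory, not the literal source -- the high water mark only triggers a reap
attempt; if nothing at the LRU tail is reclaimable, the allocation proceeds
anyway, which is what makes entries_hiwat a soft limit:)

    /* Paraphrased sketch of mdcache entry allocation, from memory --
     * not the literal Ganesha source. */
    mdcache_entry_t *nentry = NULL;

    /* Over the high water mark: try to reclaim the LRU tail first. */
    if (lru_state.entries_used >= lru_state.entries_hiwat)
            nentry = lru_try_reap_entry();

    /* Nothing reclaimable (every tail entry holds refs beyond the
     * sentinel)?  Allocate anyway -- the cache grows past the mark. */
    if (nentry == NULL) {
            nentry = gsh_calloc(1, sizeof(*nentry));
            atomic_inc_int64_t(&lru_state.entries_used);
    }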
> >
> > Frank
> >
> >> Lane 1:
> >> (gdb) p LRU[1].L1
> >> $18 = {q = {next = 0x7fad625c0300, prev = 0x7faec08c5100},
> >>       id = LRU_ENTRY_L1, size = 253006}
> >> (gdb) p (mdcache_entry_t *)(0x7fad625c0300 - 1024)
> >> $21 = (mdcache_entry_t *) 0x7fad625bff00
> >> (gdb) p $21->lru
> >> $22 = {q = {next = 0x7fad66fce600, prev = 0x7d68a0 <LRU+224>},
> >>        qid = LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}
> >>
> >> (gdb) p $21->fh_hk.inavl
> >> $24 = true
> >>
> >> As per LRU_ENTRY_RECLAIMABLE(), these entries should be reclaimable.
> >> Not sure why the reaper is not able to reclaim them. Any ideas?
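
(One thing worth noting about the reap path, again paraphrasing from
memory: the reaper takes its own reference *before* the reclaimable test,
so an idle entry -- sentinel ref only, refcnt == 1 at rest -- reads 2
inside the check. Your entries already read 2 at rest, so the reaper sees
3 and skips them:)

    /* Paraphrase of the reap check, from memory -- not the literal
     * source.  The increment happens before the test, so
     * "reclaimable" means refcnt == 2 *after* the reaper's own ref,
     * i.e. refcnt == 1 at rest. */
    refcnt = atomic_inc_int32_t(&entry->lru.refcnt);
    if (!LRU_ENTRY_RECLAIMABLE(entry, refcnt)) {
            /* Someone else holds a ref (state, in-flight I/O, or a
             * leak): drop our ref and skip this entry. */
            mdcache_lru_unref(entry);
            continue;
    }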
> >>
> >> Thanks,
> >> Pradeep
> >>
> >>

