No, on the OSS we found that only the client which reported the "dirty page
discard" was being evicted.
We hit this again last night, and on the OSS we can see logs like:
"
[Tue Aug 25 23:40:12 2020] LustreError:
14278:0:(ldlm_lockd.c:256:expired_lock_main()) ### lock callback timer
expired after 100s: evicting
I am looking for the various policy rules that can be applied for Lustre
Persistent Client Cache (PCC). In the docs, I see an example using projid,
fname and uid. Where can I find a complete list of supported rules?
Also, is there a way for PCC to cache only the contents of a few folders?
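For reference, the kind of rule the manual shows looks something like the
sketch below. The paths, project IDs and UID here are placeholders, and the
exact rule grammar (e.g. '&' for AND, ',' for OR) may differ between Lustre
versions, so please check lctl-pcc(8) or the PCC chapter of the manual for
your release:

```shell
# Attach a PCC backend with a match rule (placeholder paths/IDs;
# syntax per the Lustre manual's PCC example, version-dependent):
lctl pcc add /mnt/lustre /mnt/pcc \
    --param "projid={500}&fname={*.h5},uid={1001} rwid=2"

# Show the rules currently configured on a mount point:
lctl pcc list /mnt/lustre
```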
Your output shows InfiniBand NIDs (@o2ib). If you are mounting via @tcp, what
is your TCP access path to the InfiniBand file system? Multihomed servers? An
LNet router?
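If it is an LNet router, the tcp-side route would normally be set up with
something like this (a sketch only; the gateway NID is a placeholder, not
from your output):

```shell
# On the TCP client, route the o2ib network through an LNet router
# (replace the gateway NID with your router's tcp NID):
lnetctl route add --net o2ib --gateway 192.168.8.1@tcp

# Verify the route is up:
lnetctl route show
```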
--Jeff
On Tue, Aug 25, 2020 at 8:32 AM Peeples, Heath
wrote:
> We have just built a 2.12.5 cluster. When trying to mount the fs (via
>
Was this an initial mount of a new file system or a new TCP client being
introduced to an existing file system? Can you describe your setup a little
more?
On Tue, Aug 25, 2020 at 9:32 AM Peeples, Heath
wrote:
> We have just built a 2.12.5 cluster. When trying to mount the fs (via
> tcp), I
The I/O was not fully committed after close() from the client. Are you
experiencing high numbers of evictions?
On Tue, Aug 25, 2020 at 9:12 AM 肖正刚 wrote:
> Hi, all
>
> We found that some clients' dmesg filled up with messages like
> "
> Aug 24 19:54:34 ln5 kernel: Lustre:
>
We have just built a 2.12.5 cluster. When trying to mount the fs (via tcp), I
get the following errors. Would anyone have an idea what the problem might
be? Thanks in advance.
[10680.535157] LustreError: 15c-8: MGC192.168.8.8@tcp: The configuration from
log 'ldata-client' failed (-2).
Hi, all
We found that some clients' dmesg filled up with messages like
"
Aug 24 19:54:34 ln5 kernel: Lustre:
13565:0:(llite_lib.c:2759:ll_dirty_page_discard_warn()) public1: dirty page
discard: 10.10.2.11@o2ib:10.10.2.12@o2ib:/public1/fid:
[0x27a82:0x1680f:0x0]/ may get corrupted (rc -108)
Hi,
Still hoping for a reply...
It seems to me that old groups are more affected by the issue than new ones
that were created after a major disk migration.
It seems that the quota enforcement is somehow based on a counter other
than the accounting, as the accounting produces the same numbers as