Hi Shambhu-
I think lustre-discuss might be able to help you better if you were to explain
why it is that you want to mount a Lustre filesystem as a block device. Is it
just to get it to show up in the output of lsblk? Would you prefer the output
of findmnt?
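Just for context, a Lustre client mount is not backed by a local block device, so it will never show up in lsblk; findmnt can filter on the filesystem type instead. A minimal illustration (the mount itself is whatever you have on your client):

    findmnt -t lustre
    # lists TARGET, SOURCE (e.g. mgsnode@tcp:/fsname), FSTYPE "lustre" and OPTIONS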
-Laura
It's been a while since I've worked with ZFS servers, but one old chestnut with
ZFS 0.7 on the MDTs was the variable dnode size feature. I believe the tunable
was something like "dnodesize=auto", which caused problems, and it could be
changed to "dnodesize=1024" or
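If it is that setting, it can be inspected and changed per dataset. A minimal sketch, assuming an MDT dataset named mdtpool/mdt0 (the name is a placeholder, and the new value only applies to dnodes allocated after the change):

    zfs get dnodesize mdtpool/mdt0
    zfs set dnodesize=1k mdtpool/mdt0    # a fixed 1k (1024-byte) size instead of "auto"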
We have also seen a similar issue in the past. The bug report, which discusses
the ZFS patches, is here:
https://jira.whamcloud.com/browse/LU-13536
thanks
-k
--
Kaizaad Bilimorya
Systems Administrator - SHARCNET | http://www.sharcnet.ca
Digital Research Alliance of Canada
On Fri, Mar 17, 2023 at
Hi all,
it may not help directly, but...
your error is related to ZFS, not to Lustre. For years we had a problem with
kernel panics on our Lustre MDS (lustre-2.12.*, zfs-0.7.*) under high load,
until we integrated patches 78e213946 and 58769a4eb into our zfs-0.7.13
sources via git cherry-pick.
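Roughly, it looks like the following; the repository URL, branch name and build steps are only a sketch, so adapt them to however you normally build and package ZFS:

    git clone https://github.com/zfsonlinux/zfs.git
    cd zfs
    git checkout -b local-0.7.13 zfs-0.7.13     # start from the 0.7.13 tag
    git cherry-pick 78e213946 58769a4eb         # the two fixes mentioned above
    sh autogen.sh && ./configure && make -j$(nproc)
    # then install or package the result as you normally would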
Unfortunately this problem seems to be getting worse, to the point where ZFS
panics immediately after Lustre recovery completes when the system is under
load.
Luckily this happened on our /home filesystem, which is relatively small. We are
rebuilding onto spare hardware so we can return the
Hi Andreas,
I'm talking on the order of ~10,000s of project IDs.
I've been thinking the same as you, that is, doing PROJID=1M + UID etc.
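As a rough illustration of that mapping (the base offset, path and username below are made up for the example):

    uid=$(id -u someuser)
    lfs project -p $((1000000 + uid)) -s -r /lustre/home/someuser   # assign and inherit
    lfs project -d /lustre/home/someuser                            # verify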
However, in our case, it might be better to rely on some scripting and an
external DB to keep track of the latest added ID, so that we could increment
the