Hello everyone,
We have a Lustre (v1.8.4) test cluster with 2 MDSs, 2 OSSs, and 4 clients.
I am experimenting with quotas, but something does not seem to work.
First I ran ::
cli1# lfs quotacheck /lustre
cli1# lfs quotaon -ug /lustre
cli1# lfs quota /lustre
and got ::
Disk quotas for user
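For reference, the limits themselves are set with lfs setquota; a minimal
sketch, where the user name testuser and the limit values are only
illustrative ::
cli1# lfs quotacheck -ug /lustre
cli1# lfs quotaon -ug /lustre
cli1# lfs setquota -u testuser -b 1000000 -B 1100000 -i 10000 -I 11000 /lustre
cli1# lfs quota -u testuser /lustre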
Thank you very much for your reply.
So the problem was that OST0002 was inactive.
mds2# cat /proc/fs/lustre/lov/jtest1-mdtlov/target_obd
0: jtest1-OST_UUID ACTIVE
1: jtest1-OST0001_UUID ACTIVE
2: jtest1-OST0002_UUID INACTIVE
3: jtest1-OST0003_UUID ACTIVE
And I fixed it by ::
mds2# lctl dl
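In case it helps someone else: the usual 1.8 recipe is to find the OSC's
device number with lctl dl and then reactivate it. Here <devno> stands for
whatever number lctl dl reports for the jtest1-OST0002 OSC ::
mds2# lctl dl | grep OST0002
mds2# lctl --device <devno> activate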
Hello everyone,
On Lustre v1.8.4, since last night we have been facing problems with our
scratch FS. So I went to the corresponding MDS, where in /var/log/messages
I found the following ::
Jan 24 23:12:13 mds kernel: LustreError:
5107:0:(ldlm_lock.c:430:__ldlm_handle2lock()) ASSERTION(handle) failed
Jan
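If anyone wants to dig further, the Lustre kernel debug buffer can be dumped
to a file with lctl right after the assertion fires; the output path below
is just an example ::
mds# lctl dk /tmp/lustre-debug.log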
Hello everyone,
We are planning a new system and I would like your help with Lustre.
We will have only 2 nodes for the Lustre servers, and I would like to ask
your opinion on how to configure failover.
I was thinking of setting
node1 active :: mds/mgs, passive :: oss
node2 active :: oss, passive :: mds/mgs
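As far as I understand, the failover pair is declared at format time, so
each target can be mounted on the surviving node after a failure. A rough
sketch, where the node names and block devices are only placeholders for
our setup ::
node1# mkfs.lustre --fsname=lustre --mgs --mdt --failnode=node2@tcp0 /dev/sda
node2# mkfs.lustre --fsname=lustre --ost --mgsnode=node1@tcp0 --mgsnode=node2@tcp0 --failnode=node1@tcp0 /dev/sdb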
Hello everyone,
In our Lustre v1.8.4 production system we are facing some problems with
quotas.
I have a group with only 1 user.
When I run ::
# lfs quota -g groupID /lustre
Disk quotas for group groupID (gid 7850):
Filesystem kbytes quota limit grace files quota limit grace
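Since the group has only one user, it may be worth comparing the group view
with the user view and the per-OST breakdown; userID here stands for the
single member of the group, and -v prints usage per OST ::
# lfs quota -u userID /lustre
# lfs quota -v -g groupID /lustre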