Hi,

There are a few further messages on the Lustre server like this -

Jul 24 08:25:35 lustre-3ware kernel: Lustre: Found inode with zero generation or link -- this may indicate disk corruption (inode: 10251612/839130903, link 0, count 1)
Jul 24 08:25:35 lustre-3ware last message repeated 2 times
Jul 24 08:25:35 lustre-3ware kernel: Lustre: Skipped 6 previous similar messages
Jul 24 08:25:35 lustre-3ware kernel: Lustre: Found inode with zero generation or link -- this may indicate disk corruption (inode: 10251612/839130903, link 0, count 1)
Jul 24 08:25:35 lustre-3ware kernel: Lustre: Skipped 6 previous similar messages
Jul 24 09:09:10 lustre-3ware kernel: Lustre: Found inode with zero generation or link -- this may indicate disk corruption (inode: 10251612/839130903, link 0, count 1)
Jul 24 09:09:10 lustre-3ware kernel: Lustre: Found inode with zero generation or link -- this may indicate disk corruption (inode: 10251612/839130903, link 0, count 1)
Jul 24 09:09:10 lustre-3ware kernel: Lustre: Skipped 3 previous similar messages
Jul 24 09:09:10 lustre-3ware kernel: Lustre: Found inode with zero generation or link -- this may indicate disk corruption (inode: 10251612/839130903, link 0, count 1)
Jul 24 09:09:10 lustre-3ware kernel: Lustre: Skipped 3 previous similar messages
Jul 24 09:09:10 lustre-3ware kernel: Lustre: Found inode with zero generation or link -- this may indicate disk corruption (inode: 10251612/839130903, link 0, count 1)

These messages are preceded by many instances of the following error -
Jul 24 01:31:24 lustre-3ware kernel: LustreError: 3654:0:(ldlm_lib.c:1554:target_handle_dqacq_callback()) dqacq failed! (rc:-5)

The data on the volumes is fine and the 3ware RAID volume is working fine. Is the disk corruption error related to the quota problem, or is this a new issue? Thanks very much.


Regards
Balagopal



Balagopal Pillai wrote:
On Tue, 24 Jul 2007, Balagopal Pillai wrote:
Hi,

There is one more line in the error messages that I missed including last time -

Jul 24 01:31:19 lustre-3ware kernel: LustreError: 3657:0:(fsfilt-ldiskfs.c:1962:fsfilt_ldiskfs_dquot()) operate dquot before it's enabled!

Thanks
Balagopal

Hi,

I am having a quota issue with Lustre. Here are the default limits set for an account -

/home 0 400000 450000 189 100000 100000

This sets a quota of about 300 MB or so. After reading the documentation, I thought this would have set ~4 GB or so, but it is one zero less. The quota limit is working, though.
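If I am reading the block limits as 1 KB units (which is my understanding for ldiskfs quotas - please correct me if that is wrong), the arithmetic would be roughly:

    400000 KB * 1024 = ~409,600,000 bytes = ~390 MB
    ~4 GB would need roughly 4,194,304 KB, i.e. one more zero

so a limit of 400000 giving a few hundred MB rather than ~4 GB would be consistent with 1 KB units.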

But today I found the following issue with two accounts. The account goes over quota showing higher block utilization, while the real space occupied by the account is only a few megabytes on the filesystem.
Here is an example -

     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
          /home  307796  600000  650000             221  100000  100000
home-MDT0000_UUID
                    316       0  102400             221       0    5000
home-OST0000_UUID
                 307480       0  409600
du -sh ../username/
3.8M    ../username/
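(For reference, the report above is the usual per-target quota query - something like the command below, with username standing in for the real account name:

lfs quota -u username /home

so home-OST0000 is charging ~300 MB of blocks against the account, while du only finds 3.8M.)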
On the Lustre server, there are quite a few of the following error messages -

Jul 24 01:31:09 lustre-3ware kernel: LustreError: 3659:0:(fsfilt-ldiskfs.c:1962:fsfilt_ldiskfs_dquot()) Skipped 11 previous similar messages
Jul 24 01:31:09 lustre-3ware kernel: LustreError: 3659:0:(quota_master.c:194:lustre_dqget()) can't read dquot from admin quotafile! (rc:-5)
Jul 24 01:31:09 lustre-3ware kernel: LustreError: 3659:0:(quota_master.c:194:lustre_dqget()) Skipped 11 previous similar messages
Jul 24 01:31:09 lustre-3ware kernel: LustreError: 3659:0:(ldlm_lib.c:1554:target_handle_dqacq_callback()) dqacq failed! (rc:-5)
Jul 24 01:31:09 lustre-3ware kernel: LustreError: 3537:0:(quota_context.c:422:dqacq_completion()) acquire qunit got error! (rc:-5)

Regards
Balagopal
_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss

