Dear Lustre Experts,

I have upgraded a test system from Lustre 2.12.7 with ZFS 0.7.13 backend to Lustre 2.15.0-RC5 with ZFS 2.0.7 backend. The data has been transferred using zfs send/recv to new hardware.

Project quotas are activated on all zpools as well as in the Lustre config. For new files and folders created after the upgrade, everything works as expected. For old files and folders, setting the project quota fails:

lfs project -sr -p1 old-folder
lfs: failed to set xattr for 'old-folder': No such device or address

This issue has already been reported and was already present in Lustre 2.14: https://jira.whamcloud.com/browse/LU-15640

A workaround is to migrate the old folder between MDTs. This recreates the attributes, and the project ID attribute is then included and can be set. Setting the project ID after migrating between MDTs results in the correct number of files being shown by lfs quota -p. Unfortunately, the consumed disk space shown by lfs quota -p is much too small. In addition, an LBUG is triggered from time to time on the OSTs. This has also been reported: https://jira.whamcloud.com/browse/LU-13189. The problem seems to be that the OSD objects are expected to have the project ID attribute, but they don't. The only way out of this situation is to evict all clients and delete the files in question; otherwise you run directly into the next LBUG after rebooting the OST. Another workaround is to migrate all data to different OSTs (which may themselves be old, upgraded ones). As with the metadata, this creates new objects which include the project ID.
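For reference, a sketch of the workaround described above. The mount point, MDT index, and project ID are placeholders for illustration, not my actual setup:

```shell
# Migrate the directory's metadata to another MDT (here: index 1),
# which recreates the inode attributes including the project ID slot:
lfs migrate -m 1 /mnt/lustre/old-folder

# Afterwards the project ID can be set recursively with inheritance:
lfs project -sr -p 1 /mnt/lustre/old-folder

# Check the accounting (the file count is correct, but the
# consumed disk space is reported far too small):
lfs quota -p 1 /mnt/lustre

# For file data, restriping with lfs migrate (without -m) writes new
# OSD objects on the OSTs, which then carry the project ID:
lfs migrate -c 1 /mnt/lustre/old-folder/some-file
```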

The migration workarounds are of course extremely expensive on larger systems. According to the ZFS documentation, an update of the on-disk format can be triggered by zfs set version=current (https://openzfs.github.io/openzfs-docs/man/7/zpool-features.7.html). However, that does not do anything here.
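Concretely, this is what I tried (pool/dataset names are placeholders). The feature check is an assumption on my part about where the relevant flag lives:

```shell
# Attempt to bump the on-disk format; no visible effect on the problem:
zfs set version=current tank/mdt0

# Pool-level feature flags can be inspected and enabled separately:
zpool get feature@project_quota tank
zpool upgrade tank
```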

I guess I'm not the only one with plans to upgrade a system with a ZFS backend. Have others run into the same issues?

Best regards,
Robert

_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
