Hello Lustre community,

We are operating a small HPC cluster (576 compute cores) with a small Lustre parallel filesystem (64 TB) connected over an InfiniBand EDR network. The Lustre filesystem is implemented by a single HPE DL380 Gen10 server acting as MGS, MDS and OSS. It currently has two 32 TB OSTs (HPE MSA 2050). As more space is required, we will soon install 160 TB of additional storage, implemented as two 80 TB OSTs (HPE MSA 2060).
We looked at the Lustre documentation (10.2.1. Scaling the Lustre File System: https://doc.lustre.org/lustre_manual.xhtml#idm140220261007664) and ran tests with small VMs. It appears that in our case adding this new storage would be very simple. From what we understand, we should do something like this:

# Create mount points for the new OSTs
mkdir /mnt/ost{2,3}

# The MGS is running on the same node as the OSTs
mgs_node="$(sed -n -e 's/^ *- *nid: *//; T; p' < /etc/lnet.conf)"

# Set the devices corresponding to the new OSTs using invariant names
ost2_device=/dev/disk/by-path/...
ost3_device=/dev/disk/by-path/...

# Create the file systems on the new OSTs
mkfs.lustre --fsname=lustrevm --mgsnode=$mgs_node --ost --index=2 $ost2_device
mkfs.lustre --fsname=lustrevm --mgsnode=$mgs_node --ost --index=3 $ost3_device

# Update fstab
cat >> /etc/fstab << _EOF_
$ost2_device /mnt/ost2 lustre defaults,_netdev 0 0
$ost3_device /mnt/ost3 lustre defaults,_netdev 0 0
_EOF_

# Mount the new OSTs
mount /mnt/ost2
mount /mnt/ost3

This seems almost too simple. Are we missing something? Will new files created by the clients use all four OSTs with no additional effort?

Thanks in advance!

Martin Audet
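P.S. After mounting, we were planning to verify from a client that the new OSTs are visible and that new files actually stripe across them, with something like the sketch below (the client mount point /mnt/lustre and the test file name are just examples, not our actual paths):

# List all OSTs with their usage as seen from a client;
# the two new OSTs (indexes 2 and 3) should appear here once mounted
lfs df -h /mnt/lustre

# Create a test file striped across all available OSTs (-c -1)
# and inspect its layout to confirm objects land on the new OSTs
lfs setstripe -c -1 /mnt/lustre/stripe_test
lfs getstripe /mnt/lustre/stripe_test

Does this sound like a reasonable check, or is there a better way to confirm the new OSTs are being used?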
_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
