On Mar 4, 2023, at 02:50, 覃江龙 via lustre-discuss 
<lustre-discuss@lists.lustre.org> wrote:
> 
> Dear Developer,
> I hope this message finds you well. I am currently working with a Lustre file 
> system installed on two nodes, with a client mounted and the Lustre client 
> directory re-exported over NFS. When I generated traffic into the Lustre 
> directory and one of the nodes failed, the MGS and OST services failed over 
> to the second node, and it took five to six minutes for the traffic to 
> resume. However, when I switched to an ext3 file system, the traffic resumed 
> in only one to two minutes.
> 
> I was wondering if you could shed some light on why the Lustre failover 
> takes longer, and how I could potentially address it. Thank you for your 
> time and expertise.

Not that I want to discourage Lustre usage, but if you can run your workload 
from a single-node ext3 (really should be ext4) filesystem, then that will be 
much less complex than using Lustre.

Lustre is a scalable distributed filesystem and needs to deal with a far more 
difficult environment than a single-node ext3 filesystem.  It works with up to 
thousands of servers, and up to low tens of thousands of clients sharing a 
single namespace.
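
As for the five-to-six-minute gap itself, that is most likely Lustre's 
recovery window: after a failover, the new server waits for the previously 
connected clients to reconnect and replay their outstanding transactions 
before admitting new I/O, which a local ext3 remount does not have to do.  A 
minimal sketch of how one might inspect and shorten that window on the 
servers follows; the parameter names assume an ldiskfs-based OST and default 
tunables, so verify them against your Lustre version before relying on them:

    # Watch recovery progress on the failed-over targets
    # (paths vary by target type):
    lctl get_param obdfilter.*.recovery_status   # OSTs
    lctl get_param mdt.*.recovery_status         # MDTs

    # The recovery window is bounded by these tunables (seconds);
    # lowering them trades faster failover for aborting slow clients:
    lctl get_param *.*.recovery_time_soft *.*.recovery_time_hard
    lctl set_param *.*.recovery_time_soft=60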

Using a 2-node Lustre filesystem to re-export NFS mostly misses the point of 
Lustre, which is scalability and high performance.  If you want to use 
Lustre, you would be much better off mounting the filesystem directly with 
the Lustre client on each node.  This will give you better performance, 
stronger data consistency than NFS, and less complexity than a Lustre + NFS 
re-export stack.  The main reason to re-export Lustre over NFS is to allow a 
few clients (e.g. data capture hardware, non-Linux clients) to access a 
filesystem that is mostly used by native Lustre clients.
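
For reference, mounting the filesystem natively from a client is a single 
mount call; the MGS NID and fsname below are placeholders, so substitute 
your own:

    # On each client node (192.168.1.10@tcp and fsname "lustre"
    # are hypothetical values):
    mount -t lustre 192.168.1.10@tcp:/lustre /mnt/lustre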

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Whamcloud