Re: [lustre-discuss] [EXTERNAL] [BULK] MDS hardware - NVME?

2024-01-10 Thread Cameron Harr via lustre-discuss
On 1/10/24 11:59, Thomas Roth via lustre-discuss wrote: Actually we had MDTs on software raid-1 *connecting two JBODs* for quite some time - worked surprisingly well and stable. I'm glad it's working for you! Hmm, if you have your MDTs on a zpool of mirrors aka raid-10, wouldn't going towards ...

Re: [lustre-discuss] [EXTERNAL] [BULK] MDS hardware - NVME?

2024-01-10 Thread Thomas Roth via lustre-discuss
Actually we had MDTs on software raid-1 *connecting two JBODs* for quite some time - worked surprisingly well and stable. Still, personally I would prefer ZFS anytime. Nowadays all our OSTs are on ZFS, very stable. Of course, a look at all the possible ZFS parameters tells me that surely ...
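For illustration only, a minimal sketch of that kind of setup: one SSD from each JBOD mirrored with mdadm and formatted as an ldiskfs MDT. The device names, fsname, and MGS NID are placeholders, not the configuration from this thread:

  # mirror one device from each JBOD (placeholder device names)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY
  # format the mirror as an ldiskfs MDT and mount it (placeholder fsname / MGS NID)
  mkfs.lustre --mdt --fsname=testfs --index=0 --mgsnode=mgs@o2ib /dev/md0
  mount -t lustre /dev/md0 /mnt/mdt0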

Re: [lustre-discuss] [EXTERNAL] [BULK] MDS hardware - NVME?

2024-01-09 Thread Cameron Harr via lustre-discuss
Thomas, We value management over performance and have knowingly left performance on the floor in the name of standardization, robustness, management, etc., while still maintaining our performance targets. We are a heavy ZFS-on-Linux (ZoL) shop, so we never considered MD-RAID, which, IMO, is very ...

Re: [lustre-discuss] [EXTERNAL] [BULK] MDS hardware - NVME?

2024-01-08 Thread Jeff Johnson
Today nvme/mdraid/ldiskfs will beat nvme/zfs on MDS IOPS, but you can close the gap somewhat with tuning: zfs ashift/recordsize and special allocation class vdevs. While the IOPS performance favors nvme/mdraid/ldiskfs, there are tradeoffs. The snapshot/backup abilities of ZFS and the security it provides ...
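To make the knobs mentioned here concrete, a hedged sketch of a ZFS MDT pool with an explicit ashift and a mirrored special allocation class vdev. The device names, property values, fsname, and MGS NID below are placeholders for illustration, not recommendations from the post:

  # pool with explicit ashift and a mirrored special vdev (placeholder devices)
  zpool create -o ashift=12 mdt0pool \
      mirror /dev/nvme0n1 /dev/nvme1n1 \
      special mirror /dev/nvme2n1 /dev/nvme3n1
  # send small blocks to the special vdev; tune recordsize to taste
  zfs set special_small_blocks=64K mdt0pool
  zfs set recordsize=128K mdt0pool
  # create the Lustre MDT dataset on the pool (placeholder fsname / MGS NID)
  mkfs.lustre --mdt --backfstype=zfs --fsname=testfs --index=0 --mgsnode=mgs@o2ib mdt0pool/mdt0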

Re: [lustre-discuss] [EXTERNAL] [BULK] MDS hardware - NVME?

2024-01-08 Thread Thomas Roth via lustre-discuss
Hi Cameron, did you run a performance comparison between ZFS and mdadm-raid on the MDTs? I'm currently doing some tests, and the results favor software raid, in particular when it comes to IOPS. Regards Thomas On 1/5/24 19:55, Cameron Harr via lustre-discuss wrote: This doesn't answer your question ...
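For context, metadata IOPS comparisons like this are commonly run with mdtest; a hypothetical invocation is shown below (the rank count, file count, and path are made up, and this is not necessarily the exact test used in the thread):

  # create/stat/unlink many files from 16 MPI ranks (placeholder counts and path)
  mpirun -np 16 mdtest -n 65536 -i 3 -u -F -d /mnt/lustre/mdtest

Running the same invocation once against the mdadm-backed MDT and once against the ZFS-backed MDT gives comparable creates/stats/removals per second.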

Re: [lustre-discuss] [EXTERNAL] [BULK] MDS hardware - NVME?

2024-01-08 Thread Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.] via lustre-discuss
... enough that we never set up automated failover via corosync or something similar. From: Vinícius Ferrão, Sunday, January 7, 2024 at 12:06 PM. To: "Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.]". Cc: Thomas Roth, Lustre discussion list. Subject: Re: [lustre-discuss] [EXTERNAL] ...

Re: [lustre-discuss] [EXTERNAL] [BULK] MDS hardware - NVME?

2024-01-07 Thread Vinícius Ferrão via lustre-discuss
Hi Vicker, may I ask if you have any kind of HA on this setup? If yes, I'm interested in how the ZFS pools would migrate from one server to another in case of failure. I'm considering the typical Lustre deployment where you have two servers attached to two JBODs using a multipath SAS topology with ...
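For reference, the basic manual migration path in a two-server/shared-JBOD layout is to export the pool on the node giving up the target (if it is still reachable) and import it on the peer. A hedged sketch with placeholder pool and mount names follows; a real HA setup would typically also enable ZFS multihost protection and/or drive these steps from Pacemaker/corosync:

  # on the node giving up the MDT (if still up)
  umount /mnt/mdt0
  zpool export mdt0pool
  # on the peer node attached to the same JBOD
  zpool import -f mdt0pool
  mount -t lustre mdt0pool/mdt0 /mnt/mdt0
  # optional safety net against double imports (requires a unique hostid per node)
  zpool set multihost=on mdt0pool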

Re: [lustre-discuss] [EXTERNAL] [BULK] MDS hardware - NVME?

2024-01-05 Thread Cameron Harr via lustre-discuss
This doesn't answer your question about ldiskfs on zvols, but we've been running MDTs on ZFS on NVMe in production for a couple of years (and on SAS SSDs for many years prior). Our current production MDTs using NVMe consist of one zpool/node made up of 3x 2-drive mirrors, but we've been experimenting ...
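A rough sketch of the layout described (one pool per node built from three 2-drive NVMe mirrors); the device names, fsname, and MGS NID are placeholders rather than the production configuration:

  zpool create mdt0pool \
      mirror /dev/nvme0n1 /dev/nvme1n1 \
      mirror /dev/nvme2n1 /dev/nvme3n1 \
      mirror /dev/nvme4n1 /dev/nvme5n1
  mkfs.lustre --mdt --backfstype=zfs --fsname=testfs --index=0 --mgsnode=mgs@o2ib mdt0pool/mdt0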

Re: [lustre-discuss] [EXTERNAL] [BULK] MDS hardware - NVME?

2024-01-05 Thread Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.] via lustre-discuss
We are in the process of retiring two long-standing LFSs (about 8 years old), which we built and managed ourselves. Both use ZFS and have the MDTs on SSDs in a JBOD that require the kind of software-based management you describe, in our case ZFS pools built on multipath devices. The MDT in ...
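As a hedged illustration of building a pool on dm-multipath devices (the mapper names below are invented, and the real pools in this thread may be laid out differently):

  multipath -ll                      # confirm the multipath devices are present
  zpool create mdt0pool mirror /dev/mapper/mpatha /dev/mapper/mpathb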