The performance requirements may also help drive your OSS count, interconnect choice, and other sizing decisions.

Also, I don't have a lot of experience with NFS/CIFS gateways, but that is perhaps its own topic and may need some close attention.
Scott

On 7/21/2015 10:57 AM, Indivar Nair wrote:
Hi ...,

One of our customers has a 3 x 240 disk SAN storage array and would like to convert it to Lustre.

They have around 150 workstations and around 200 compute (render) nodes.

The file sizes they generally work with are:
1 to 1.5 million files (images) of 10-20MB in size
A few thousand files of 500-1000MB in size

Almost 50% of the infrastructure is on MS Windows or Apple Macs.

I was thinking of the following configuration:
1 MDS
1 failover MDS
3 OSS (failover to each other)
3 NFS+CIFS gateway servers
FDR InfiniBand backend network (to connect the gateways to Lustre)
Each gateway server will have 8 x 10GbE frontend network (connecting the clients)

*Option A*
10+10 disk RAID60 array with 64KB chunk size, i.e. 1MB stripe width
720 disks / (10+10) = 36 arrays
12 OSTs per OSS
18 OSTs per OSS in case of failover

*Option B*
10+10+10+10 disk RAID60 array with 128KB chunk size, i.e. 4MB stripe width
720 disks / (10+10+10+10) = 18 arrays
6 OSTs per OSS
9 OSTs per OSS in case of failover
4MB RPC and I/O

*Questions*
1. Would it be better to let Lustre do most of the striping / file distribution (as in Option A), or would it be better to let the RAID controllers do it (as in Option B)?
2. Will Option B allow us to have less CPU/RAM than Option A?

Regards,
Indivar Nair
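For reference, a rough sketch of how the two layouts might map onto the Lustre side is below. The mount point, directory names, and exact parameter values are illustrative assumptions, not taken from the thread; the commands are standard Lustre client/admin tools (lfs, lctl).

    # Full-stripe math for the two RAID60 layouts (8 data disks per RAID6 group):
    #   Option A: 2 groups x 8 data disks x 64KB chunk  = 1MB full stripe
    #   Option B: 4 groups x 8 data disks x 128KB chunk = 4MB full stripe

    # Align the Lustre stripe size with the array's full stripe width
    # (/mnt/lustre and the directory names are hypothetical):
    lfs setstripe -S 1M -c 1 /mnt/lustre/option_a_dir
    lfs setstripe -S 4M -c 1 /mnt/lustre/option_b_dir

    # For Option B's 4MB RPCs, clients would also need larger bulk RPCs on a
    # Lustre version with 4MB I/O support, e.g. 1024 pages x 4KB = 4MB per RPC:
    lctl set_param osc.*.max_pages_per_rpc=1024

With 10-20MB image files, a stripe count of 1 would keep each file on a single OST either way, so the question largely comes down to whether a 1MB or 4MB full stripe lines up better with the client RPC size and the RAID controllers' write behaviour.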