On Wed, Dec 30, 2009 at 09:47:09AM -0800, Brian McGrew wrote:
> Good morning all:
>
> So this is kind of a generalized question but I’m going to throw it out
> here anyway! I have a rack of 2950s with 15k SAS drives on PERC
> controllers. These boxes work great, no problems.
>
> Until now, my usage of these machines has always been specifically
> application targeted and I’ve deployed in Windows and Linux. Of course
> I’m partial to Linux. I’ve never worried about multi-platform client
> support before.
>
> Now, what I want and need to do is build a high performance storage
> system out of some of these machines. They will not be clustered or
> load balanced or anything like that. Each box will be completely
> stand-alone and independent of the other (different boxes, different
> customers).
>
> So... I have x number of drives in a RAID5 array for y number of
> gigabytes of disk space. My question is, how should I go about setting
> up a high performance storage system that will support multiple clients
> (ie Windows, MacOS and Unix) but be very very fast. I’ve tried FreeNAS,
> OpenFiler, ClarkConnect and ClearOS (newer ClarkConnect) and I’m just
> not seeing the performance I need.
>
> The problem I see with these solutions is that sharing files to a
> Windows box, the performance is ok but not great. However sharing files
> to a Mac or Linux box (via smb, cifs or nfs) just flat sucks. FTP works
> fine but then again, using FTP, Active Directory authentication is so
> slow it’s almost unbearable.
>
> Any recommendations??? Of course I would prefer open source but if I
> have to pony up a few bucks to get some kind of a commercial product
> that’ll do what I need, I will. All the boxes have dual GigE
> connections that are trunked to the switches, so effectively 2Gb/s of
> bandwidth coming from each box. Also all the boxes have 8 (or more) GB
> of RAM, so, hardware performance isn’t so much an issue.
>
> Thanks!
Can you provide any numbers? Is your workload primarily sequential
reads (long reads needing max throughput) or more random IO? What type
of speeds are you seeing with FTP vs NFS/CIFS?
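(If you need a consistent way to get those numbers, a simple sequential
test run against each protocol's mount point gives comparable figures;
the filename and sizes here are just examples:)

```shell
# Write ~100MB sequentially; conv=fdatasync flushes the data out
# before dd reports, so the MB/s figure isn't just page cache.
dd if=/dev/zero of=testfile bs=1M count=100 conv=fdatasync
# Read it back for the sequential-read side of the picture.
dd if=testfile of=/dev/null bs=1M
```

Run it once over NFS, once over CIFS, once over FTP to the same box and
the comparison is at least apples-to-apples.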
Is the AD authentication being done via winbind? Some tuning there
might make it quicker...
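If it is winbind, a couple of smb.conf settings often help with auth
latency. These are standard Samba options, but the right values depend
on your domain, so treat this as a starting point rather than a recipe:

```
[global]
   # cache winbind lookups so every authentication doesn't hit the DC
   winbind cache time = 300
   # don't enumerate the entire domain's users/groups on demand
   winbind enum users = no
   winbind enum groups = no
```

Disabling enumeration in particular can make a big difference on large
domains.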
Some obvious and generic suggestions:
- Dedicated storage network
- Jumbo frames
- Tuning wsize/rsize on NFS/CIFS (perhaps tuning on the clients as well?)
- Link aggregation (if the switches allow it and throughput needs demand it)
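To make the wsize/rsize and jumbo-frame points concrete, here is a
client-side sketch. The server name, export path, and mount point are
placeholders, and the exact sizes need benchmarking on your own network:

```
# /etc/fstab on a Linux client: bump the NFS transfer sizes up from
# the old 8k default (32k is a reasonable starting point, not gospel)
server:/export  /mnt/data  nfs  rsize=32768,wsize=32768,hard,intr  0 0

# /etc/sysconfig/network-scripts/ifcfg-eth0 (RHEL-style) for jumbo
# frames; every host and switch port on the path must agree on the
# MTU or you'll see worse performance, not better
MTU=9000
```

Mount with the new options, rerun your throughput test, and change one
variable at a time so you know what actually helped.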
Ray
_______________________________________________
Linux-PowerEdge mailing list
[email protected]
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq