On 12/30/09 9:56 AM, "Ray Van Dolson" <[email protected]> wrote:
> On Wed, Dec 30, 2009 at 09:47:09AM -0800, Brian McGrew wrote:
>> Good morning all:
>>
>> So this is kind of a generalized question but I'm going to throw it
>> out here anyways! I have a rack of 2950s with 15k SAS drives on PERC
>> controllers. These boxes work great, no problems.
>>
>> Until now, my usage of these machines has always been specifically
>> application targeted, and I've deployed on Windows and Linux. Of
>> course I'm partial to Linux. I've never worried about multi-platform
>> client support before.
>>
>> Now, what I want and need to do is build a high-performance storage
>> system out of some of these machines. They will not be clustered or
>> load balanced or anything like that. Each box will be completely
>> stand-alone and independent of the others (different boxes, different
>> customers).
>>
>> So... I have x number of drives in a RAID 5 array for y number of
>> gigabytes of disk space. My question is, how should I go about
>> setting up a high-performance storage system that will support
>> multiple clients (i.e. Windows, MacOS and Unix) but be very, very
>> fast? I've tried FreeNAS, OpenFiler, ClarkConnect and ClearOS (the
>> newer ClarkConnect) and I'm just not seeing the performance I need.
>>
>> The problem I see with these solutions is that when sharing files to
>> a Windows box, the performance is OK but not great. However, sharing
>> files to a Mac or Linux box (via SMB, CIFS or NFS) just flat out
>> sucks. FTP works fine, but then again, using FTP, Active Directory
>> authentication is so slow it's almost unbearable.
>>
>> Any recommendations??? Of course I would prefer open source, but if
>> I have to pony up a few bucks to get some kind of commercial product
>> that'll do what I need, I will. All the boxes have dual GigE
>> connections that are trunked to the switches, so effectively 2Gb/s
>> of bandwidth coming from each box. Also, all the boxes have 8 (or
>> more) GB of RAM, so hardware performance isn't so much an issue.
>>
>> Thanks!
> Can you provide any numbers? Is your workload primarily sequential
> reads (long reads needing max throughput) or more random IO? What
> type of speeds are you seeing with FTP vs NFS/CIFS?
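>
> (A quick, generic way to characterize the workload, assuming the
> server has sysstat installed: watch the disks while clients are
> busy. Small average request sizes and high await times usually
> point at random IO.)
>
>     # extended per-device stats, refreshed every 5 seconds
>     iostat -x 5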
>
> AD authentication is being done via winbind? You can do some tuning
> there perhaps to make it quicker...
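>
> (For example, in smb.conf -- assuming Samba 3.x with winbind; these
> are the usual knobs, not a tested recipe:)
>
>     [global]
>         # cache winbind lookups so each login doesn't hit the DC
>         winbind cache time = 300
>         idmap cache time = 604800
>         # enumerating a large AD domain is slow; skip it if you can
>         winbind enum users = no
>         winbind enum groups = no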
>
> Some obvious and generic suggestions:
>
> - Dedicated storage network
> - Jumbo frames
> - Tuning wsize/rsize on NFS/CIFS (perhaps tuning on clients as
>   well? see the sketch below)
> - Link aggregation (if switches allow and throughput needs demand
>   it)
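>
> (A rough sketch of the jumbo frame and wsize/rsize items; the
> interface name, export path and values are placeholders and depend
> on your NICs, switches and kernel:)
>
>     # client-side NFS mount with larger transfer sizes over TCP
>     mount -t nfs -o rsize=32768,wsize=32768,tcp server:/export /mnt/data
>
>     # jumbo frames -- must be enabled end to end, switch included
>     ifconfig eth0 mtu 9000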
>
> Ray
-----
It's primarily random IO. I know one customer is going to be storing
VMware images on the box (but not a VMware server). Another is going
to be a generic file server, and the last is going to be a home
directory server/desktop server for both Windows and Linux. So the
speeds need to be there.
When I move a 4GB DVD image over SMB/CIFS I'm looking at 2+ hours of
transfer time, vs. FTP, which moves it in under 10 minutes.
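
(For anyone wanting to reproduce the numbers, it's essentially just
timing the same file both ways; the paths and server name below are
placeholders:)

    # over the CIFS mount (sync so cached writes are counted too)
    time sh -c 'cp dvd-image.iso /mnt/winshare/; sync'

    # same file via FTP (curl makes it easy to time)
    time curl -T dvd-image.iso ftp://server/upload/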
-b
_______________________________________________
Linux-PowerEdge mailing list
[email protected]
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq