On Mar 7, 2011, at 6:12 AM, wessel van der aart <wes...@postoffice.nl> wrote:

> Hi All,
> 
> I've been asked to set up a 3D render farm at our office. At the start it
> will contain about 8 nodes, but it should be built for growth. The setup
> I had in mind is as follows:
> All the data is already stored on a StorNext SAN filesystem (Quantum).
> This is mounted on a CentOS server through fibre optics, which in turn
> shares the FS over NFS to all the render nodes (also CentOS).
> 
> Now we've estimated that the average file sent to each node will be
> about 90MB, so that's the throughput I'd like per connection. I know
> that gigabit Ethernet should be able to do that (testing with iperf
> confirms it), but testing against already existing NFS shares gives me
> 55MB/s max. As I'm not familiar with tuning network share performance,
> I was wondering if anybody here is and could give me some info on this?
> I also thought of giving all the nodes 2x 1GbE ports and putting those
> in a bond. Will that do any good, or do I have to look at the NFS
> server side first?

1GbE can do 115MB/s at 64K+ IO sizes, but at a 4K IO size (NFS) 55MB/s is about it.
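You can see the IO-size effect for yourself with a quick dd sweep. A minimal sketch, assuming you point TARGET at the NFS mount (it defaults to /tmp for a local dry run; the path and counts are illustrative):

```shell
#!/bin/sh
# Hypothetical throughput probe: write the same 100MB with small vs
# large IOs and compare the rates dd reports on its last line.
TARGET=${TARGET:-/tmp}   # set TARGET=/mnt/nfs (or wherever the share is mounted)

# 100MB in 4K writes -- roughly what NFS does on the wire here.
dd if=/dev/zero of="$TARGET/ddtest" bs=4k count=25600 conv=fsync 2>&1 | tail -1

# Same 100MB in 64K writes -- far fewer round trips.
dd if=/dev/zero of="$TARGET/ddtest" bs=64k count=1600 conv=fsync 2>&1 | tail -1

rm -f "$TARGET/ddtest"
```

Over a local disk the two rates will be close; over the NFS mount the gap shows how much the per-IO latency costs you.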

If you need each node to be able to read at 90-100MB/s, you would need to set up a 
cluster file system over iSCSI or FC, and make sure either that the cluster file 
system can handle large block/cluster sizes like 64K, or that the application 
issues large IOs and the scheduler does a good job of coalescing them (the VFS 
layer breaks them into 4K chunks) back into large IOs.
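Before going to iSCSI/FC there is some headroom in NFS itself. A sketch, assuming a Linux client and a server you can tune (server:/export and /mnt/render are placeholder names; the kernel clamps rsize/wsize to what the server advertises):

```shell
# Client side: ask for 64K read/write transfer sizes over TCP.
mount -t nfs -o rsize=65536,wsize=65536,tcp,hard,intr \
    server:/export /mnt/render

# Server side (CentOS): with 8+ render nodes hammering one server,
# raise the nfsd thread count in /etc/sysconfig/nfs and restart nfs:
#   RPCNFSDCOUNT=32
```

Bigger rsize/wsize only helps reads/writes that are actually issued large; it won't fix an application doing 4K IOs.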

It's the latency of each small IO that is killing you.
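On the bonding question: with most bonding modes a single TCP stream still rides one physical link, and an NFS client uses one TCP connection per mount, so 2x 1GbE in a node won't double that node's speed. Bonding the *server's* ports does help aggregate traffic across many nodes. A sketch of the server-side config, assuming the usual CentOS ifcfg layout (bond0, eth0/eth1, and the address are placeholder names):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=10.0.0.10          # placeholder address
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=802.3ad miimon=100"   # LACP; the switch must support it

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
```

With 802.3ad the per-flow hash still pins each client to one link, but different clients land on different links, which is what a render farm needs.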

-Ross

_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
