On Wed, Jun 01, 2011 at 09:13:03PM -0700, Wayne E. Van Loon Sr. wrote:
> At the start of each optimization job, I will have a few data files that 
> need to be distributed to each of the helper optimization processes. 
> Rather than have the central optimizer / server send these files 24 
> times, some way to broadcast these files to all machines / processes at 
> the same time might be nice.

They do something like that on large compute clusters, using MTFTP, UFTP,
or UDPcast (I think; I've been out of the loop a while).  For a cluster as
small as yours, I suspect it's not going to help much.  If the files are
big enough for it to make a difference, they're probably too big for
reliable multicast distribution.  Better to make sure you have a nice
healthy gigabit backbone, and perhaps let the hosts pull their files over
NFS rather than pushing them.
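
For what it's worth, a rough sketch of both approaches.  The udp-sender /
udp-receiver commands come from the UDPcast package; the file names, export
path, and subnet are made up for illustration:

```shell
# Multicast push with UDPcast: start a receiver on every helper node
# first, then kick off the sender on the central server.
#   on each helper node:
udp-receiver --file /scratch/input.dat
#   on the central server:
udp-sender --file input.dat

# NFS pull instead: export the data directory once on the server and
# let each node mount it and copy what it needs.
#   /etc/exports on the server (example path and subnet):
#     /srv/optdata  192.168.1.0/24(ro,async)
#   on each helper node:
mount -t nfs server:/srv/optdata /mnt/optdata
cp /mnt/optdata/input.dat /scratch/
```

The pull variant also means a node that joins late (or gets rebooted) can
just grab the files itself instead of waiting for another broadcast.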

Can you comment on the nature of the compute problem?
_______________________________________________
PLUG mailing list
[email protected]
http://lists.pdxlinux.org/mailman/listinfo/plug
