Has anyone heard of or seen TIPC <tipc.sourceforge.net> used in a
Beowulf Cluster?

I haven't. I sat in on TIPC meetings at OLS a few times, and my impression is that the TIPC people are much more interested in telecom/footprint issues than in HPC. (And yes, I believe these are very different concerns: for HPC, the main issue is latency, since bandwidth is not that hard.)

I _think_ I'm not confusing TIPC with SCTP (which also seems to be rather
telecom-oriented).

here are some rather shocking performance measurements:
http://www.strlen.de/tipc/

no mention of latency there.
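
for comparison, here's the kind of ping-pong microbenchmark that would measure it. just a minimal sketch of my own, not from that page: it assumes a plain TCP echo service already running on the peer, and the address and port below are placeholders.

/* ping-pong latency sketch: bounce a small message N times off a TCP
 * echo server and report the average one-way latency.  the peer
 * address and port are placeholders, not from the thread. */
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const int N = 10000;
    char buf[64] = "ping";
    struct sockaddr_in srv = { .sin_family = AF_INET,
                               .sin_port   = htons(7) };  /* placeholder port */
    inet_pton(AF_INET, "10.0.0.2", &srv.sin_addr);        /* placeholder peer */

    int sd = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(sd, (struct sockaddr *)&srv, sizeof srv) < 0)
        return 1;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        write(sd, buf, sizeof buf);   /* send a small message... */
        read(sd, buf, sizeof buf);    /* ...and wait for the echo */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("avg one-way latency: %.2f us\n", us / N / 2);
    close(sd);
    return 0;
}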

Some folks from Wind River (the creators of the protocol, I think) came and
gave a talk about it at my school. They said it can be used over IP,
or even on its own directly over ethernet, and would even work with Myrinet
or InfiniBand given the proper drivers.

well, TIPC is trying to do a lot that TCP isn't. for instance, I think it's trying to do fairly full group membership as well as topology-aware routing. I'm not sure these are as critical to HPC-type clustering as they would be for HA-type clustering.

I'm also a bit skeptical of a protocol that aims to put everything into
one kernel-resident layer...
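
to be concrete about what that kernel layer does: TIPC addresses services by name rather than by node, through AF_TIPC sockets (<linux/tipc.h>). here's a minimal sketch; the service type/instance numbers are arbitrary examples of mine.

/* send to a TIPC *service name*; the in-kernel TIPC layer resolves
 * which node currently provides it (this is where the group-membership
 * and topology machinery lives).  type/instance values are arbitrary. */
#include <linux/tipc.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int sd = socket(AF_TIPC, SOCK_RDM, 0);  /* reliable datagram socket */

    struct sockaddr_tipc srv;
    memset(&srv, 0, sizeof srv);
    srv.family = AF_TIPC;
    srv.addrtype = TIPC_ADDR_NAME;
    srv.addr.name.name.type = 18888;   /* example service type */
    srv.addr.name.name.instance = 17;  /* example instance */
    srv.addr.name.domain = 0;          /* 0 = cluster-wide lookup */

    const char msg[] = "hello";
    /* note: no host address appears anywhere in this program */
    sendto(sd, msg, sizeof msg, 0, (struct sockaddr *)&srv, sizeof srv);
    return 0;
}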

I'm still not very familiar with programming a Beowulf, but Inter-Process
Communication is just as viable a paradigm as Message Passing, right?

TIPC is a form of MP. don't confuse MP with MPI! MPI is important and widespread, but I don't think many people would say it's perfect. MPI-over-TCP in particular is kind of a shame, since TCP is really a protocol designed for flaky, overloaded, heterogeneous WANs, not the kind of dedicated, homogeneous, flat network you find in an HPC cluster.
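
to illustrate the distinction: MPI is just one (very widespread) API for message passing, and on a commodity ethernet cluster a program like the following usually ends up running over TCP underneath. this is the standard MPI C API; run it with something like "mpirun -np 2 ./pingpong".

/* minimal MPI ping-pong between ranks 0 and 1; on plain ethernet this
 * typically rides on TCP, which is exactly the layering lamented above */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0 got its message back\n");
    } else if (rank == 1) {
        MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}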

I'm looking forward to Open-MX: it's a message-passing layer that runs over plain ethernet, but is well-suited as a transport for MPI. any Open-MX people care to comment?

regards, mark hahn.