On Fri, Apr 18, 2008 at 04:22:13PM -0700, John Vert wrote:

> V1 ("Windows Compute Cluster Server 2003") uses
> WinsockDirect. High-speed interconnects like Infiniband plug into
> this stack through the existing WinsockDirect provider interface.
Ah. It's still considerably slower than real native support.

> V2 ("Windows HPC Server 2008", coming soon) introduces a new
> provider interface called NetworkDirect which maps much better to
> the hardware. So far we are seeing excellent performance and the 2
> microsecond latency quoted earlier is one example.

OK, so how about a test sensitive to overhead, like mpi-multibw?
Latency isn't a very good test because most programs don't just
ping-pong between 2 processes; they send to a bunch of neighbors at
once. I could easily believe that it's better than V1, but most
people want to know if it's as fast as what they're used to.

> Our V2 job scheduler also has a lot of performance improvements. If
> you care about how long the scheduler takes to submit, allocate,
> reserve, and activate a 1,000+ CPU job, I think you'll like
> that. This is really nothing to do with "Linux clusters" as it's
> largely a job scheduler issue and most job schedulers support
> multiple platforms.

Then why the slam against slow Linux cluster startup times in a blog
and presentations?

-- greg

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf