Re[2]: [Beowulf] How Can Microsoft's HPC Server Succeed?

2008-04-19 Thread Jan Heichler
Hello Bogdan, on Friday, 18 April 2008, you wrote: > Sorry to divert the thread a bit back towards its initial subject and away from the security issues currently discussed... > I've just seen a presentation from a University (which shall remain unnamed) which partnered with Microsoft

Re: [Beowulf] Improving access to a Linux beowulf cluster for Windows users

2008-04-19 Thread Mikhail Kuzminsky
In message from Kilian CAVALOTTI [EMAIL PROTECTED] (Fri, 18 Apr 2008 16:38:26 -0700): ... The same benchmark on lower-end hardware (E5345) running Linux (same 4X DDR IB though) gives roughly 30% better results: #--- # Benchmarking PingPong #
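
The PingPong figures being compared above come from the Intel MPI Benchmarks. As a rough illustration of what that test measures, here is a minimal ping-pong latency sketch in plain MPI C; the 1 KB message size, iteration count, and timing are simplified assumptions for illustration, not the IMB implementation itself:

/* minimal MPI ping-pong sketch: rank 0 and rank 1 bounce a message
   back and forth and report the average one-way time */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, i, iters = 1000;
    char buf[1024];                      /* 1 KB message, arbitrary size */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg one-way time: %f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);
    MPI_Finalize();
    return 0;
}

Run with two ranks (e.g. mpirun -np 2 ./pingpong); the interconnect in use is whatever the MPI library was built against, which is exactly what makes the Windows-vs-Linux comparison above interesting.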

RE: [Beowulf] How Can Microsoft's HPC Server Succeed?

2008-04-19 Thread John Vert
Microsoft's MPI stack has never used DAPL. V1 (Windows Compute Cluster Server 2003) uses Winsock Direct. High-speed interconnects like InfiniBand plug into this stack through the existing Winsock Direct provider interface. V2 (Windows HPC Server 2008, coming soon) introduces a new provider
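
For readers unfamiliar with that model: the application (or MPI library) side just uses ordinary Winsock stream sockets, and an installed Winsock Direct / SAN provider can switch such connections onto the high-speed interconnect beneath it. A minimal sketch of that application-side view, with placeholder host and port, and not drawn from Microsoft's MPI source:

/* ordinary Winsock client; with a Winsock Direct (SAN) provider
   installed, a SOCK_STREAM connection like this can be carried over
   the high-speed interconnect without any code changes */
#include <winsock2.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    struct sockaddr_in peer;
    SOCKET s;
    const char msg[] = "ping";

    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    peer.sin_family = AF_INET;
    peer.sin_port = htons(5000);                  /* placeholder port */
    peer.sin_addr.s_addr = inet_addr("10.0.0.2"); /* placeholder node */

    if (connect(s, (struct sockaddr *)&peer, sizeof peer) == 0)
        send(s, msg, sizeof msg, 0);

    closesocket(s);
    WSACleanup();
    return 0;
}

Nothing in the code names the fabric, which is the point of routing interconnect support through the provider interface rather than through DAPL.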

Re[2]: [Beowulf] How Can Microsoft's HPC Server Succeed?

2008-04-19 Thread Donald Becker
On Sat, 19 Apr 2008, Jan Heichler wrote: > But then I changed my mind when I started to hear what a great feature it is to have several nodes booting and installing the OS in the same 50 minutes (yes, minutes!) that a single node takes, due to a wonderful feature called multicast.
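
The mechanism behind that claim is that every node joins the same multicast group, so the deployment server sends the OS image once and N nodes receive it in roughly the time one node would take. A minimal receiver-side sketch in C, not taken from any particular deployment tool, with a placeholder group address and port:

/* minimal multicast receiver sketch: each node joins the same group,
   so one transmitted image stream reaches every node simultaneously */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);              /* placeholder port */
    bind(fd, (struct sockaddr *)&addr, sizeof addr);

    /* join the deployment group; every node performs the same join */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.1"); /* placeholder group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);

    char buf[1500];
    ssize_t n = recv(fd, buf, sizeof buf, 0); /* one chunk of the shared stream */
    printf("received %zd bytes of the shared image stream\n", n);

    close(fd);
    return 0;
}

Whether 50 minutes for a single node is something to brag about is, of course, the part of the thread Donald is getting at.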