On Tue, Sep 9, 2008 at 9:18 AM, Tim Kroeger <[EMAIL PROTECTED]> wrote:
> Dear John,
>
> On Tue, 9 Sep 2008, John Peterson wrote:
>
>>>>>> On linux, lspci will tell you something about the hardware connected
>>>>>> to the PCI bus.  This may list the interconnect device(s).
>>>>>
>>>>> lspci seems not to be installed on that machine, although it is linux.
>>>>
>>>> Try /sbin/lspci - there is a good chance /sbin is not in your path.
>>>
>>> Ah, that was the trick, thank you.  I have attached the output.
>>
>> I should probably have mentioned: we would need the output of lspci
>> from a compute node; the head node may or may not be on a fast
>> interconnect.  As it stands it looks like you just have dual GigE.
>
> Compute node and head node give exactly the same output.  So does this
> mean I have a very slow interconnect, and is this the reason for the bad
> scalability?
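As an aside for anyone reading this in the archives: once /sbin/lspci works, you can filter its output for network device classes to spot the interconnect quickly. The device lines in the here-document below are made-up examples standing in for real lspci output, not the hardware attached in this thread:

```shell
# Filter lspci-style output for interconnect devices.  On a real node you
# would run:  /sbin/lspci | grep -Ei 'ethernet|infiniband|myrinet'
# The here-document stands in for lspci output; these device names are
# hypothetical examples, not the hardware from this thread.
cat <<'EOF' | grep -Ei 'ethernet|infiniband|myrinet'
00:1f.2 IDE interface: Intel Corporation 82801EB IDE Controller
04:00.0 Ethernet controller: Intel Corporation 82541GI Gigabit Ethernet Controller
05:00.0 Ethernet controller: Intel Corporation 82541GI Gigabit Ethernet Controller
EOF
# Prints only the two Ethernet controller lines; an InfiniBand HCA or
# Myrinet card would also match here if one were installed.
```

If the compute nodes show nothing beyond Ethernet controllers, as appears to be the case here, then GigE is all the interconnect there is.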
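To put rough numbers on why interconnect latency matters, here is a back-of-envelope per-message cost model, time = latency + size/bandwidth. The figures used (~50 us / ~125 MB/s for GigE, ~5 us / ~1 GB/s for a specialized interconnect) are assumed order-of-magnitude values, not measurements from this cluster:

```shell
# Rough per-message cost: time = latency + size / bandwidth.
# All latency/bandwidth figures are assumed ballpark values, not
# measurements from any particular machine.
awk 'BEGIN {
  size = 1024                      # message size in bytes
  gige = 50e-6 + size / 125e6      # ~50 us latency, ~125 MB/s (GigE)
  fast = 5e-6  + size / 1e9        # ~5 us latency, ~1 GB/s (e.g. InfiniBand)
  printf "GigE: %.1f us  fast interconnect: %.1f us\n", gige * 1e6, fast * 1e6
}'
# Prints: GigE: 58.2 us  fast interconnect: 6.0 us
```

For the small messages typical of parallel sparse solvers, the fixed latency term dominates the cost, which is one reason GigE clusters often stop scaling after a handful of nodes.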
As I understand it, the latency of gigabit ethernet is still quite bad
compared to specialized communication hardware.  I'm not completely
blaming GigE for the poor scaling results you are seeing, but it likely
plays a factor.  Now that we know what kind of hardware you're using, we
have a better idea of what type of performance you should expect to see.

--
John

_______________________________________________
Libmesh-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libmesh-users
