Re: [OMPI users] New to (Open)MPI

2016-09-02 Thread Dave Goodell (dgoodell)
Lachlan mentioned that he has "M Series" hardware, which, to the best of my 
knowledge, does not officially support usNIC.  It may not even be possible to 
configure the relevant usNIC adapter policy in UCSM for M-Series 
modules/chassis.

Using the TCP BTL may be the only realistic option here.
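
If it comes to that, you can restrict Open MPI to the TCP and self BTLs 
explicitly. A minimal sketch (./my_mpi_app is just a placeholder for your own 
binary):

mpirun --mca btl tcp,self -np 2 --map-by node ./my_mpi_app

If that runs cleanly with the two ranks on different nodes, the TCP path is 
working.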

-Dave

> On Sep 2, 2016, at 5:35 AM, Jeff Squyres (jsquyres)  
> wrote:
> 
> Greetings Lachlan.
> 
> Yes, Gilles and John are correct: on Cisco hardware, our usNIC transport is 
> the lowest latency / best HPC-performance transport.  I'm not aware of any 
> MPI implementation (including Open MPI) that has support for FC types of 
> transports (including FCoE).
> 
> I'll ping you off-list with some usNIC details.
> 
> 
>> On Sep 1, 2016, at 10:06 PM, Lachlan Musicman  wrote:
>> 
>> Hola,
>> 
>> I'm new to MPI and OpenMPI. Relatively new to HPC as well.
>> 
>> I've just installed a SLURM cluster and added OpenMPI for the users to take 
>> advantage of.
>> 
>> I'm just discovering that I have missed a vital part - the networking.
>> 
>> I'm looking over the networking options and from what I can tell we only 
>> have (at the moment) Fibre Channel over Ethernet (FCoE).
>> 
>> Is this a network technology that's supported by OpenMPI?
>> 
>> (system is running Centos 7, on Cisco M Series hardware)
>> 
>> Please excuse me if I have terms wrong or am missing knowledge. Am new to 
>> this.
>> 
>> cheers
>> Lachlan
>> 
>> 
>> --
>> The most dangerous phrase in the language is, "We've always done it this 
>> way."
>> 
>> - Grace Hopper
> 
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to: 
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 

___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users


Re: [OMPI users] New to (Open)MPI

2016-09-02 Thread Jeff Squyres (jsquyres)
Greetings Lachlan.

Yes, Gilles and John are correct: on Cisco hardware, our usNIC transport is the 
lowest latency / best HPC-performance transport.  I'm not aware of any MPI 
implementation (including Open MPI) that has support for FC types of transports 
(including FCoE).

I'll ping you off-list with some usNIC details.
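
In the meantime, a quick way to check whether your Open MPI build has usNIC 
support compiled in at all (assuming ompi_info is in your PATH):

ompi_info | grep -i usnic

If the usnic BTL shows up in the component list, the support is built in; 
whether your particular adapters can actually use it is a separate question.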


> On Sep 1, 2016, at 10:06 PM, Lachlan Musicman  wrote:
> 
> Hola,
> 
> I'm new to MPI and OpenMPI. Relatively new to HPC as well.
> 
> I've just installed a SLURM cluster and added OpenMPI for the users to take 
> advantage of.
> 
> I'm just discovering that I have missed a vital part - the networking.
> 
> I'm looking over the networking options and from what I can tell we only have 
> (at the moment) Fibre Channel over Ethernet (FCoE).
> 
> Is this a network technology that's supported by OpenMPI?
> 
> (system is running Centos 7, on Cisco M Series hardware)
> 
> Please excuse me if I have terms wrong or am missing knowledge. Am new to 
> this.
> 
> cheers
> Lachlan
> 
> 
> --
> The most dangerous phrase in the language is, "We've always done it this way."
> 
> - Grace Hopper


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/

___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users


Re: [OMPI users] New to (Open)MPI

2016-09-01 Thread John Hearns via users
Hello Lachlan.  I think Jeff Squyres will be along in a short while! He is, 
of course, the expert on Cisco.

In the meantime a quick Google turns up:
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/usnic/c/deployment/2_0_X/b_Cisco_usNIC_Deployment_Guide_For_Standalone_C-SeriesServers.html

On 2 September 2016 at 06:54, Gilles Gouaillardet  wrote:

> Hi,
>
>
> FCoE is for storage; Ethernet is for the network.
>
> I assume you can ssh into your nodes, which means you have a TCP/IP network
> that is up and running.
>
> I do not know the details of Cisco hardware, but you might be able to use
> usNIC (the native BTL or via libfabric) instead of the plain TCP/IP network.
>
>
> As a first step, you can build Open MPI and run a job on two nodes with one
> task per node.
>
> In your script, you can run:
>
> mpirun --mca btl_base_verbose 100 --mca pml_base_verbose 100 ...
>
> This will tell you which network is being used.
>
>
> Cheers,
>
>
> Gilles
> On 9/2/2016 11:06 AM, Lachlan Musicman wrote:
>
> Hola,
>
> I'm new to MPI and OpenMPI. Relatively new to HPC as well.
>
> I've just installed a SLURM cluster and added OpenMPI for the users to
> take advantage of.
>
> I'm just discovering that I have missed a vital part - the networking.
>
> I'm looking over the networking options and from what I can tell we only
> have (at the moment) Fibre Channel over Ethernet (FCoE).
>
> Is this a network technology that's supported by OpenMPI?
>
> (system is running Centos 7, on Cisco M Series hardware)
>
> Please excuse me if I have terms wrong or am missing knowledge. Am new to
> this.
>
> cheers
> Lachlan
>
>
> --
> The most dangerous phrase in the language is, "We've always done it this
> way."
>
> - Grace Hopper
>
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] New to (Open)MPI

2016-09-01 Thread Gilles Gouaillardet

Hi,


FCoE is for storage; Ethernet is for the network.

I assume you can ssh into your nodes, which means you have a TCP/IP network 
that is up and running.


I do not know the details of Cisco hardware, but you might be able to 
use usNIC (the native BTL or via libfabric) instead of the plain TCP/IP network.
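
If the adapter does support usNIC and your Open MPI build includes the usnic 
BTL, you can ask for it explicitly. A sketch only (with an explicit btl list 
there is no silent fallback, so expect an error if usNIC is not usable; 
./my_mpi_app is a placeholder):

mpirun --mca btl usnic,self ./my_mpi_app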



As a first step, you can build Open MPI and run a job on two nodes with one 
task per node.


In your script, you can run:

mpirun --mca btl_base_verbose 100 --mca pml_base_verbose 100 ...

This will tell you which network is being used.
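
Putting it together, a minimal SLURM batch script (submitted with sbatch) 
might look like this; hello_mpi is just a placeholder for any small MPI test 
program:

#!/bin/bash
#SBATCH -N 2
#SBATCH --ntasks-per-node=1

mpirun --mca btl_base_verbose 100 --mca pml_base_verbose 100 ./hello_mpi

The verbose output from the btl and pml frameworks shows which components 
were selected on each node.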


Cheers,


Gilles

On 9/2/2016 11:06 AM, Lachlan Musicman wrote:

Hola,

I'm new to MPI and OpenMPI. Relatively new to HPC as well.

I've just installed a SLURM cluster and added OpenMPI for the users to 
take advantage of.


I'm just discovering that I have missed a vital part - the networking.

I'm looking over the networking options and from what I can tell we 
only have (at the moment) Fibre Channel over Ethernet (FCoE).


Is this a network technology that's supported by OpenMPI?

(system is running Centos 7, on Cisco M Series hardware)

Please excuse me if I have terms wrong or am missing knowledge. Am new 
to this.


cheers
Lachlan


--
The most dangerous phrase in the language is, "We've always done it 
this way."


- Grace Hopper


___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users