Understood - but I was wondering if that was true for OMPI as well.
On Jul 9, 2013, at 11:30 AM, "Daniels, Marcus G" wrote:
The Intel MPI implementation does this. The performance between the
accelerators and the host is poor, though: about 20 MB/sec in my ping/pong
test. Intra-MIC communication is about 1 GB/sec, whereas intra-host is about
6 GB/sec. Latency is higher (i.e. worse) for the intra-MIC communication.
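A minimal MPI ping/pong bandwidth test of the kind described above might look
like the sketch below; the message size and iteration count are illustrative,
not the exact values used in that test.

/* pingpong.c - minimal MPI ping/pong bandwidth sketch (illustrative only) */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int nbytes = 1 << 20;   /* 1 MiB message (assumed size) */
    const int iters  = 100;
    char *buf = malloc(nbytes);
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    /* Each iteration moves nbytes in each direction. */
    if (rank == 0)
        printf("bandwidth ~ %.1f MB/s\n",
               2.0 * iters * nbytes / (t1 - t0) / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}

Placing one rank on the host and one on the MIC (or both ranks on the MIC) is
what distinguishes the host-MIC, intra-MIC, and intra-host numbers quoted above.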
Hi Tim
Quick question: can the procs on the MIC communicate with procs on (a) the
local host, (b) other hosts, and (c) MICs on other hosts?
The last two would depend on having direct access to one or more network
transports.
On Jul 9, 2013, at 10:18 AM, Tim Carlson wrote:
On Mon, 8 Jul 2013, Tim Carlson wrote:
Now that I have gone through this process, I'll report that it works with
the caveat that you can't use the openmpi wrappers for compiling. Recall
that the Phi card does not have either the GNU or Intel compilers
installed. While you could build up a
On Mon, 8 Jul 2013, Elken, Tom wrote:
My mistake on the OFED bits. The host I was installing on did not have all
of the MPSS software installed (my cluster admin node and not one of the
compute nodes). Adding the intel-mic-ofed-card RPM fixed the problem with
compiling the btl:openib bits.
Hi Tim,
Well, in general (and not just on MIC) I usually build the MPI stacks using the
Intel compiler set. Have you run into s/w that requires GCC instead of the
Intel compilers (besides NVIDIA CUDA)? Did you try to use the Intel compiler to
produce MIC-native code (the Open MPI stack, for that matter)?
regards
On Mon, 8 Jul 2013, Elken, Tom wrote:
It isn't quite so easy.
Out of the box, there is no gcc on the Phi card. You can use the cross
compiler on the host, but you don't get gcc on the Phi by default.
See this post http://software.intel.com/en-us/forums/topic/382057
I really think you would
Thanks Tom, I will test it out...
regards
Michael
On Mon, Jul 8, 2013 at 1:16 PM, Elken, Tom wrote:
Thanks Tom, that sounds good. I will give it a try as soon as our Phi host here
gets installed.
I assume that all the prerequisite libs and bins on the Phi side are available
when we download the Phi s/w stack from Intel's site, right?
[Tom]
Right. When you install Intel's MPSS
Do you guys have any plan to support Intel Phi in the future? That is, running
MPI code on the Phi cards or across the multicore and Phi, as Intel MPI does?
[Tom]
Hi Michael,
Because a Xeon Phi card acts a lot like a Linux host with an x86 architecture,
you can build your own Open MPI libraries
Thanks ...
Michael
On Mon, Jul 8, 2013 at 8:50 AM, Rolf vandeVaart wrote:
With respect to the CUDA-aware support, Ralph is correct. The ability to send
and receive GPU buffers is in the Open MPI 1.7 series, and incremental
improvements will be added as that series evolves. CUDA 5.0 is supported.
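As a rough sketch of what the CUDA-aware support allows, the following passes a
cudaMalloc'd device pointer directly to MPI_Send/MPI_Recv. It assumes an Open
MPI 1.7 build configured with --with-cuda; the buffer size and data type are
illustrative.

/* cuda_buffer.c - sketch of sending a GPU buffer directly with CUDA-aware MPI */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int n = 1 << 20;   /* number of floats (illustrative) */
    int rank;
    float *d_buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaMalloc((void **)&d_buf, n * sizeof(float));   /* device memory, not host */

    if (rank == 0) {
        /* With a CUDA-aware build, the device pointer is passed directly;
           the library handles the staging between device and host/network. */
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats into device memory\n", n);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}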
There was discussion of this on a prior email thread on the OMPI devel mailing
list:
http://www.open-mpi.org/community/lists/devel/2013/05/12354.php
On Jul 6, 2013, at 2:01 PM, Michael Thomadakis wrote:
thanks,
Do you guys have any plan to support Intel Phi in the future? That is,
running MPI code on the Phi cards or across the multicore and Phi, as Intel
MPI does?
thanks...
Michael
On Sat, Jul 6, 2013 at 2:36 PM, Ralph Castain wrote:
Rolf will have to answer the question on level of support. The CUDA code is not
in the 1.6 series as it was developed after that series went "stable". It is in
the 1.7 series, although the level of support will likely be incrementally
increasing as that "feature" series continues to evolve.
Hello OpenMPI,
I am wondering what level of support there is for CUDA and GPUDirect in
Open MPI 1.6.5 and 1.7.2.
I saw the ./configure --with-cuda=CUDA_DIR option in the FAQ. However, it
seems that configure in v1.6.5 ignored it.
Can you identify GPU memory and send messages from it?