On 7/18/06 7:33 PM, "Warner Yuen" wrote:
> USING GCC 4.0.1 (build 5341) with and without Intel Fortran (build
> 9.1.027):
What version of Open MPI were you working with? If it was a developer/SVN
checkout, what version of the GNU Auto tools were you using?
> Config #2:
Dear All,
I was able to compile Open MPI and create the wrapper compilers (like
mpicc, mpif77, etc.) on top of the GNU compilers. But when I tried it with the Intel
Fortran compiler (since I also need an f90 compiler), I met with a configuration
error (hence I didn't get the Makefile). I am herewith attaching
Dear All,
I have been using Open MPI for the last month, so I need some clarification
regarding the following points.
1). What is the advantage of Open MPI over MPICH2 and LAM/MPI? I mean to say, is
there any difference performance-wise?
2). Is there any checkpointing mechanism in Open MPI?
Hi,
shen T.T. wrote:
> Do you have the other compiler? Could you check the error and report it ?
I don't use other Intel compilers at the moment, but I'm going to give
gfortran a try today.
Kind regards,
--
Frank Gruellich
HPC-Techniker
Tel.: +49 3722 528 42
Fax: +49 3722 528 15
Hi,
George Bosilca wrote:
> For the all-to-all collective, the send and receive buffers have to be able
> to contain all the information you try to send. In this particular case,
> as you initialize the envio variable to a double, I suppose it is defined
> as a double. If that's the case, then the error
It is what I suspected. You can see that the envio array is smaller than
it should be. It was created as an array of doubles of size t_max, when
it should have been created as an array of doubles of size t_max *
nprocs. If you look at how the recibe array is created, you can notice that
On 7/20/06 12:04 AM, "Jeff Squyres" wrote:
>> Config #2: ./configure --disable-shared --enable-static --with-rsh=/usr/bin/ssh
>> Successful Build = NO
>> Error:
>> g++ -O3 -DNDEBUG -finline-functions -Wl,-u -Wl,_munmap -Wl,-multiply_defined -Wl,suppress -o ompi_info
Could you re-send that? The attachment that I got was an excel spreadsheet
with the output from configure that did not show any errors -- it just
stopped in the middle of the check for "bool" in the C++ compiler.
Two notes:
1. One common mistake that people make is to use the "icc" compiler for
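For reference, compilers are passed to Open MPI's configure script through the standard CC/CXX/F77/FC variables. The sketch below shows one way to mix GCC with Intel Fortran, as the earlier poster was attempting; the --prefix path is purely illustrative.

```shell
# A sketch: keep GCC for C/C++ but use Intel ifort for Fortran 77/90.
# (/opt/openmpi is an illustrative install prefix, not from the thread.)
./configure CC=gcc CXX=g++ F77=ifort FC=ifort --prefix=/opt/openmpi
```

Mismatching the C++ compiler family between Open MPI and the application is a frequent source of the kind of configure failures shown above.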
Hi,
Is the MPI paradigm applicable to a cluster of regular networked machines?
That is, does the cost of network I/O offset the benefits of parallelization?
My guess is that this really depends on the application itself, however,
I'm wondering if you guys know of any success stories which involve MPI
It's doable, but the scaling will not be as good, because a network is a
network. If you are using just regular 100 Mbit Ethernet, you will not scale
as far as really good 1 Gig Ethernet, but we are still talking about
TCP, which incurs a penalty compared to networks like InfiniBand and Myrinet.
TCP is the largest
I think there are two questions here:
1. Running MPI applications on "slow" networks (e.g., 100 Mbps). This is
very much application-dependent. If your MPI app doesn't communicate with
other processes much, then it probably won't matter. If you have
latency/bandwidth-sensitive applications,
Hi George,
George Bosilca wrote:
> It is what I suspected. You can see that the envio array is smaller than
> it should be. It was created as an array of doubles of size t_max, when
> it should have been created as an array of doubles of size t_max *
> nprocs.
Ah, yes, I see (and even
What version of Open MPI are you using?
Can you run your application through a memory-checking debugger such as
Valgrind to see if it gives any more information about where the original
problem occurs?
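For an MPI job, a memory-checking run typically wraps the application binary so that each rank gets its own Valgrind instance. A sketch (./my_app and the rank count are illustrative):

```shell
# Each of the 2 ranks runs under its own valgrind memcheck instance;
# memory errors are reported per process on stderr.
mpirun -np 2 valgrind ./my_app
```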
On 7/17/06 10:14 PM, "Manal Helal" wrote:
> Hi
>
> after I finish
On 7/14/06 10:40 AM, "Michael Kluskens" wrote:
> I've looked through the documentation but I haven't found the
> discussion about what each BTL device is, for example, I have:
>
> MCA btl: self (MCA v1.0, API v1.0, Component v1.2)
This is the "loopback" Open MPI device. It
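To see which BTL (byte transfer layer) components a given installation has, and to restrict which ones a job uses, the usual commands are sketched below; which components appear depends on how Open MPI was built, and ./my_app is illustrative.

```shell
# List the BTL components available in this installation:
ompi_info | grep btl

# Run using only the TCP transport plus the required "self" loopback:
mpirun --mca btl self,tcp -np 4 ./my_app
```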
On 7/17/06 12:37 AM, "Mahesh Barve" wrote:
> Can anyone please enlighten us about what really
> happens in MPI_init() in openMPI?
This is quite a complicated question. :-)
> More specifically i am interested in knowing
> 1. Functions that need to be accomplished during
This looks great!
The only addition that I would ask for is some kind of executive summary at
the top so that developers can look at one small portion of data and see if
they need to look any further. One of the Big Goals of MTT was to avoid
overloading the developers w/ information by only
Well, what might suffice is having one rather short email, like the
executive summary you listed as #1, and then also include links that
would perform 3-4 standard queries, which could include:
query1: list all failures
query2: list all tests with short descriptions
query3: something 3
query4: