Hi Jeff, Ralph,
first of all: thanks for your work on this!
On 3 July 2013 21:09, Jeff Squyres (jsquyres) wrote:
> 1. The root cause of the issue is that you are assigning a
> non-existent IP address to a name. I.e., maps to 127.0.1.1,
> but that IP address does not exist anywhere. Hence, OMP
Hi,
sorry for the delay in replying -- pretty busy week :-(
On 28 June 2013 21:54, Jeff Squyres (jsquyres) wrote:
> Here's what we think we know (I'm using the name "foo" instead of
> your actual hostname because it's easier to type):
>
> 1. When you run "hostname", you get foo.local back
Yes.
Hello,
On 26 June 2013 03:11, Ralph Castain wrote:
> I've been reviewing the code, and I think I'm getting a handle on
> the issue.
>
> Just to be clear - your hostname resolves to the 127 address? And you are on
> a Linux (not one of the BSD flavors out there)?
Yes (but it resolves to 127.0.1.1, not 127.0.0.1) --
On 20 June 2013 11:29, Riccardo Murri wrote:
> However, I cannot reproduce the issue now
Just to be clear: the "issue" in that mail refers to the OpenMPI SGE
ras plugin not working with our version of SGE.
The issue with 127.0.1.1 addresses is reproducible at will.
Thanks,
Riccardo
On 19 June 2013 23:52, Reuti wrote:
> On 19.06.2013 at 22:14, Riccardo Murri wrote:
>
>> On 19 June 2013 20:42, Reuti wrote:
>>> On 19.06.2013 at 19:43, Riccardo Murri wrote:
>>>
>>>> On 19 June 2013 16:01, Ralph Castain wrote:
On 20 June 2013 06:33, Ralph Castain wrote:
> Been trying to decipher this problem, and think maybe I'm beginning to
> understand it. Just to clarify:
>
> * when you execute "hostname", you get the .local response?
Yes:
[rmurri@nh64-2-11 ~]$ hostname
nh64-2-11.local
On 19 June 2013 20:42, Ralph Castain wrote:
> I'm assuming that the offending host has some other address besides
> just 127.0.1.1 as otherwise it couldn't connect to anything.
Yes, it has an IP on some 10.x.x.x network.
> I'm heading out the door for a couple of weeks, but can try to look at it
On 19 June 2013 20:42, Reuti wrote:
> On 19.06.2013 at 19:43, Riccardo Murri wrote:
>
>> On 19 June 2013 16:01, Ralph Castain wrote:
>>> How is OMPI picking up this hostfile? It isn't being specified on the cmd
>>> line - are you running under some resource
On 19 June 2013 16:01, Ralph Castain wrote:
> How is OMPI picking up this hostfile? It isn't being specified on the cmd
> line - are you running under some resource manager?
Via the environment variable `OMPI_MCA_orte_default_hostfile`.
We're running under SGE, but disable the OMPI/SGE integration.
cat $TMPDIR/machines
nh64-1-17
nh64-1-17
No problem if we modify the setup script to create the hostfile using
FQDNs instead. (`uname -n` returns the FQDN, not the unqualified host name.)
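For the record, the change to the setup script amounts to qualifying each short name. A minimal sketch, assuming the domain suffix is `.local` as on this cluster (the `qualify_hosts` name is made up for illustration):

```shell
# Hypothetical helper: append the cluster's domain suffix (".local"
# here, as on our nodes) to every short hostname read from stdin;
# already-qualified names pass through unchanged.
qualify_hosts() {
    suffix=".local"
    while read -r host _slots; do
        case "$host" in
            *.*) printf '%s\n' "$host" ;;
            *)   printf '%s%s\n' "$host" "$suffix" ;;
        esac
    done
}
```

E.g., `qualify_hosts < "$TMPDIR/machines"` would turn the two `nh64-1-17` lines above into `nh64-1-17.local`.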
Thanks,
Riccardo
--
Riccardo Murri
http://www.gc3.uzh.ch/people/rm
Grid Computing Competence Centre
University of Zurich
On Mon, Dec 13, 2010 at 4:57 PM, Kechagias Apostolos wrote:
> I have the code that is in the attachment.
> Can anybody explain how to use scatter function?
MPI_Scatter receives the data in the initial segment of the given
buffer. (The receiving buffer needs to be 1/Nth of the send buffer.)
So, i
Hi,
On Fri, Dec 10, 2010 at 2:51 AM, Santosh Ansumali wrote:
>> - the "static" data member is shared between all instances of the
>> class, so it cannot be part of the MPI datatype (it will likely be
>> at a fixed memory location);
>
> Yes! I agree that i is global as far as different instances
On Wed, Dec 8, 2010 at 10:04 PM, Santosh Ansumali wrote:
> I am confused with the use of MPI derived datatype for classes with
> static member. How to create derived datatype for something like
> class test{
> static const int i=5;
> double data[5];
> }
>
This looks like C++ code, and I think the problem is this:
- the "static" data member is shared between all instances of the
  class, so it cannot be part of the MPI datatype (it will likely be
  at a fixed memory location);
Hi Jeff,
thanks for the explanation - I should have read the MPI standard more carefully.
In the end, I traced the bug down to using standard send instead of
synchronous send,
so it had nothing to do with the receiving side at all.
Best regards,
Riccardo
Hello,
I'm trying to debug a segfaulting application; the segfault does not
happen consistently, however, so my guess is that it is due to some
memory corruption problem which I'm trying to find.
I'm using code like this:
MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
> have these features!" (to be fair, they're somewhat obscure features).
>>
>> Other than not having the obvious "OMPI is MPI-2.2 compliant" checkmark
>> for marketing reasons, is there anyone who *needs* the functionality
>> represented by those still-open tickets?
On Tue, Aug 10, 2010 at 9:49 PM, Alexandru Blidaru wrote:
> Are the Boost.MPI send and recv functions as fast as the standard ones when
> using Open-MPI?
Boost.MPI is layered on top of plain MPI; it basically provides a
mapping from complex and user-defined C++ data types to MPI datatypes.
The ad
Hi Alexandru,
you can read all about Boost.MPI at:
http://www.boost.org/doc/libs/1_43_0/doc/html/mpi.html
On Mon, Aug 9, 2010 at 10:27 PM, Alexandru Blidaru wrote:
> I basically have to implement a 4D vector. An additional goal of my project
> is to support char, int, float and double datatypes.
Hello Alexandru,
On Mon, Aug 9, 2010 at 6:05 PM, Alexandru Blidaru wrote:
> I have to send some vectors from node to node, and the vecotrs are built
> using a template. The datatypes used in the template will be long, int,
> double, and char. How may I send those vectors since I wouldn't know wha
Hi Jack,
On Wed, Aug 4, 2010 at 6:25 AM, Jack Bryan wrote:
> I need to transfer some data, which is C++ class with some vector
> member data.
> I want to use MPI_Bcast(buffer, count, datatype, root, comm);
> May I use MPI_Datatype to define customized data structure that contain C++
> class ?
No
Hello,
The FAQ states: "Support for MPI_THREAD_MULTIPLE [...] has been
designed into Open MPI from its first planning meetings. Support for
MPI_THREAD_MULTIPLE is included in the first version of Open MPI, but
it is only lightly tested and likely still has some bugs."
The man page of "mpirun" fr
Hello,
I just re-compiled OMPI, and noticed this in the
"ompi_info --all" output:
Open MPI: 1.4.3a1r23323
...
Thread support: posix (mpi: yes, progress: no)
...
what is this "progress thread support"? Is it the "asynchronous
progress ...
Sorry, just found out about the "--debug-daemons" option, which
allowed me to google a meaningful error message and find the solution
in the archives of this list.
For the record, the problem was that the "orted" being launched on the
remote node is the one from the system-wide MPI install, not the one
from my own install in ~/sw.
Hello,
On Tue, Jun 22, 2010 at 8:05 AM, Ralph Castain wrote:
> Sorry for the problem - the issue is a bug in the handling of the
> pernode option in 1.4.2. This has been fixed and awaits release in
> 1.4.3.
>
Thank you for pointing this out. Unfortunately, I am still not able
to start remote processes.
Hello,
I'm using OpenMPI 1.4.2 on a Rocks 5.2 cluster. I compiled it on my
own to have a thread-enabled MPI (the OMPI coming with Rocks 5.2
apparently only supports MPI_THREAD_SINGLE), and installed into ~/sw.
To test the newly installed library I compiled a simple "hello world"
that comes with