Yes, I did.  I replaced the info argument of MPI_Comm_spawn with
MPI_INFO_NULL.
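
For reference, the call now looks roughly like the sketch below ("./child" and
the count of 4 are placeholders, not my actual code):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm intercomm;
        int errcodes[4];

        MPI_Init(&argc, &argv);

        /* no Info key any more -- just MPI_INFO_NULL */
        MPI_Comm_spawn("./child", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &intercomm, errcodes);

        MPI_Finalize();
        return 0;
    }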

On Tue, Feb 3, 2015 at 5:54 PM, Ralph Castain <r...@open-mpi.org> wrote:

> When running your comm_spawn code, did you remove the Info key code? You
> wouldn't need to provide a hostfile or hosts any more, which is why it
> should resolve that problem.
>
> I agree that providing either hostfile or host as an Info key will cause
> the program to segfault - I'm working on that issue.
>
>
> On Tue, Feb 3, 2015 at 3:46 PM, Evan Samanas <evan.sama...@gmail.com>
> wrote:
>
>> Setting these environment variables did indeed change the way mpirun maps
>> things, and I didn't have to specify a hostfile.  However, setting these
>> for my MPI_Comm_spawn code still resulted in the same segmentation fault.
>>
>> Evan
>>
>> On Tue, Feb 3, 2015 at 10:09 AM, Ralph Castain <r...@open-mpi.org> wrote:
>>
>>> If you add the following to your environment, you should run on multiple
>>> nodes:
>>>
>>> OMPI_MCA_rmaps_base_mapping_policy=node
>>> OMPI_MCA_orte_default_hostfile=<your hostfile>
>>>
>>> The first tells OMPI to map-by node. The second passes in your default
>>> hostfile so you don't need to specify it as an Info key.
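>>>
>>> For example (the hostfile path and binary name below are just placeholders):
>>>
>>>     export OMPI_MCA_rmaps_base_mapping_policy=node
>>>     export OMPI_MCA_orte_default_hostfile=/path/to/my_hostfile
>>>     mpirun -np 8 ./spawner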
>>>
>>> HTH
>>> Ralph
>>>
>>>
>>> On Tue, Feb 3, 2015 at 9:23 AM, Evan Samanas <evan.sama...@gmail.com>
>>> wrote:
>>>
>>>> Hi Ralph,
>>>>
>>>> Good to know you've reproduced it.  I was experiencing this using both
>>>> the hostfile and host key.  A simple comm_spawn was working for me as well,
>>>> but it was only launching locally, and I'm pretty sure each node only has 4
>>>> slots given past behavior (the mpirun -np 8 example I gave in my first
>>>> email launches on both hosts).  Is there a way to specify the hosts I want
>>>> to launch on without the hostfile or host key so I can test remote launch?
>>>>
>>>> And to the "hostname" response...no wonder it was hanging!  I just
>>>> constructed that as a basic example.  In my real use I'm launching
>>>> something that calls MPI_Init.
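>>>>
>>>> The spawned program is an ordinary MPI program roughly along these lines
>>>> (a sketch, not the real code):
>>>>
>>>>     #include <mpi.h>
>>>>
>>>>     /* sketch of the spawned child: it must call MPI_Init, otherwise the
>>>>        parent's MPI_Comm_spawn never completes */
>>>>     int main(int argc, char **argv)
>>>>     {
>>>>         MPI_Comm parent;
>>>>
>>>>         MPI_Init(&argc, &argv);
>>>>         MPI_Comm_get_parent(&parent);  /* intercommunicator back to the parent job */
>>>>         /* ... real work here ... */
>>>>         MPI_Finalize();
>>>>         return 0;
>>>>     }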
>>>>
>>>> Evan
>>>>
>>>
>>
>
