AM, Ralph Castain
>>>>>>>>>>>>>>> <r...@open-mpi.org> wrote:
>>>>>>>>>>>>>>>> It really is just that simple :-)
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> int main(int argc, char **args) {
>>>>>>>>>>>>>> int size;
>>>>>>>>>>>>>> MPI_Comm parent;
>>>>>>>>>>>>>> MPI_Init(&argc, &args);
>>>>>>>>
3 > ./mySpawningExe
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> At this point, mySpawningExe will be the master, running on
>>>>>>>>>>>>>>> 19
>>>>>> }
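[Editor's note: the truncated snippet above looks like the child ("slave") side of an MPI_Comm_spawn pair. A minimal compilable sketch of that side, using the standard MPI C API (the printf text and error handling are assumptions, not from the thread; requires an MPI installation and must be launched by a spawning parent):]

```c
/* child.c -- sketch of the spawned ("slave") side of MPI_Comm_spawn. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **args) {
    int size;
    MPI_Comm parent;

    MPI_Init(&argc, &args);
    MPI_Comm_get_parent(&parent);            /* intercommunicator back to the master */
    if (parent == MPI_COMM_NULL) {
        fprintf(stderr, "no parent communicator - was this process spawned?\n");
    } else {
        MPI_Comm_remote_size(parent, &size); /* size of the master's job */
        printf("child sees %d parent process(es)\n", size);
        MPI_Comm_disconnect(&parent);
    }
    MPI_Finalize();
    return 0;
}
```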
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Any ideas? Thanks for any help.
>>>>>>>>>>>>
>>>>>>>>
>>>>>>>>>>>> On Aug 22, 2012, at 8:56 AM, Brian Budge <brian.bu...@gmail.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Okay. Is there a tutorial or FAQ for setting eve
>>>>>>>>> wrote:
>>>>>>>>>>>>>>> Hi Elena
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I'm copying this to the user list just to correct a
>>>
childExe on
>>>>>>>>>>> 192.168.0.11 and 192.168.0.12? Or childExe1 on 192.168.0.11 and
>>>>>>>>>>> childExe2 on 192.168.0.12?
>>>>>>>>>>>
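[Editor's note: for the question above (childExe1 on 192.168.0.11, childExe2 on 192.168.0.12), the standard route is MPI_Comm_spawn_multiple with one MPI_Info per command. A hedged sketch; the executable paths and IPs come from the question, everything else is assumed, and it needs an MPI installation to build and run:]

```c
/* master.c -- sketch: spawn two different executables on two hosts. */
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Comm intercomm;
    char *cmds[2] = { "./childExe1", "./childExe2" };
    int procs[2] = { 1, 1 };              /* one process per command (assumed) */
    MPI_Info infos[2];

    MPI_Init(&argc, &argv);
    MPI_Info_create(&infos[0]);
    MPI_Info_create(&infos[1]);
    MPI_Info_set(infos[0], "host", "192.168.0.11");  /* placement per command */
    MPI_Info_set(infos[1], "host", "192.168.0.12");

    MPI_Comm_spawn_multiple(2, cmds, MPI_ARGVS_NULL, procs, infos,
                            0 /* root */, MPI_COMM_SELF, &intercomm,
                            MPI_ERRCODES_IGNORE);

    MPI_Info_free(&infos[0]);
    MPI_Info_free(&infos[1]);
    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}
```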
>>>>>>>>>>> Thanks for the h
>>>>>
>>>>>>>>>>> OMPI_MCA_orte_default_hostfile=
>>>>>>>>>>>
>>>>>>>>>>>
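[Editor's note: setting the variable named above before launching the master would look roughly like this. The hostfile path is hypothetical; per the thread, this variable applies to the 1.3 series and later, while the 1.2 series used OMPI_MCA_rds_hostfile_path instead:]

```shell
# Tell Open MPI where the default hostfile lives (hypothetical path);
# mpirun and subsequent MPI_Comm_spawn calls then draw hosts from it.
export OMPI_MCA_orte_default_hostfile=$HOME/my_hostfile

# Launch the master as in the thread, now without -hostfile:
#   mpirun -n 1 -host my_master_host my_master.exe
```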
>>>>>>>>>>> On Aug 21, 2012, at 7:23 PM, Brian Budge <brian.bu...@
>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Brian
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Jan 4, 2008 at 7:57 AM, Ralph Castain
hostfile by setting an
>>>>>>>>>>> environment
>>>>>>>>>>> variable that pointed us to the hostfile.
>>>>>>>>>>>
>>>>>>>>>>> This is incorrect in the 1.2 code series
>>>>>>>>>> host.
>>>>>>>>>>
>>>>>>>>>> This situation has been corrected for the upcoming 1.3 code series.
>>>>>>>>>> For the
>>>>>>>>>> 1.2 series, t
>>>>>>>>> straight in this old mind!
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 1/4/08 5:10 AM, "Elena Zhebel" <ezhe...@fugro-jason.com> wrote:
ot get it running...
>>>>>>>>>
>>>>>>>>> For the case
>>>>>>>>> mpirun -n 1 -hostfile my_hostfile -host my_master_host my_master.exe
>>>>>>>>> everything works.
>>>>>>>>>
2 max_slots=3
>>>>>>>> octocore01 slots=8 max_slots=8
>>>>>>>> octocore02 slots=8 max_slots=8
>>>>>>>> clstr000 slots=2 max_slots=3
>>>>>>>> clstr001 slots=2 max_slots=3
>>>>>>>> clstr0
max_slots=3
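[Editor's note: collecting the legible lines above, this is Open MPI's plain hostfile syntax: one host per line, "slots" the default number of processes scheduled there, "max_slots" the oversubscription cap. The truncated clstr entries are left out rather than guessed:]

```
# my_hostfile
octocore01 slots=8 max_slots=8
octocore02 slots=8 max_slots=8
clstr000 slots=2 max_slots=3
clstr001 slots=2 max_slots=3
```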
>>>>>>> - setenv OMPI_MCA_rds_hostfile_path my_hostfile (I put it in .tcshrc
>>>>>>> and
>>>>>>> then source .tcshrc)
>>>>>>> - in my_master.cpp I did
>>>>>>> MPI_Info info1;
>>>>>>
>>>>>> MPI_Info_set(info1, "host", hostname);
>>>>>>
>>>>>> _intercomm = intracomm.Spawn("./childexe", argv1, _nProc, info1, 0,
>>>>>> MPI_ERRCODES_IGNORE);
>>>>>>
>>>>>>
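[Editor's note: the MPI_Info fragment above can be filled out into a compilable sketch. The thread's code uses the C++ bindings (removed in MPI-3), so this uses the C API instead; "./childexe", the host string, and the process count are placeholders, and an MPI installation is required:]

```c
/* master.c -- sketch: spawn children on a chosen host via the
 * MPI_Info "host" key. Names below are placeholders. */
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Comm intercomm;
    MPI_Info info1;
    int nProc = 2;                              /* children to spawn (assumed) */

    MPI_Init(&argc, &argv);
    MPI_Info_create(&info1);
    MPI_Info_set(info1, "host", "octocore01");  /* place children on this host */

    MPI_Comm_spawn("./childexe", MPI_ARGV_NULL, nProc, info1,
                   0 /* root */, MPI_COMM_SELF, &intercomm,
                   MPI_ERRCODES_IGNORE);

    MPI_Comm_disconnect(&intercomm);
    MPI_Info_free(&info1);
    MPI_Finalize();
    return 0;
}
```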
>>> [bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file
>>> base/rmaps_base_support_fns.c at line 225
>>> [bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file
>>> rmaps_rr.c at line 478
>>> [bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file
>> [bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file
>> base/rmaps_base_map_job.c at line 210
>> [bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file
>> rmgr_urm.c at line 372
>> [bollenstreek:21443] [0,0,0] ORTE_ERROR_LOG: Out of resource in file
>> communicator/comm_dyn.c at line 608
>
>
> Did I miss something?
> Thanks for help!
>
> Elena
>
>
> -----Original Message-----
> From: Ralph H Castain [mailto:r...@lanl.gov]
> Sent: Tuesday, December 18, 2007 3:50 PM
> To: Elena Zhebel; Open MPI Users <us...@open-mpi.org>
> Cc: Ralph H Cas
Ralph H Castain [mailto:r...@lanl.gov]
> Sent: Monday, December 17, 2007 5:49 PM
> To: Open MPI Users <us...@open-mpi.org>; Elena Zhebel
> Cc: Ralph H Castain
> Subject: Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration
>
>
>
>
> On 12/17/0
above). This may become available in a future
release - TBD.
Hope that helps
Ralph
>
> Thanks and regards,
> Elena
>
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Ralph H Castain
> Sent: Monday, December
On 12/12/07 5:46 AM, "Elena Zhebel" wrote:
>
>
> Hello,
>
>
>
> I'm working on an MPI application where I'm using Open MPI instead of MPICH.
>
> In my "master" program I call the function MPI::Intracomm::Spawn which spawns
> "slave" processes. It is not
Try using the info parameter in MPI::Intracomm::Spawn().
In this structure, you can say in which hosts you want to spawn.
Info parameters for MPI spawn:
http://www.mpi-forum.org/docs/mpi-20-html/node97.htm
2007/12/12, Elena Zhebel :
>
> Hello,
>
> I'm working on a MPI
Hello,
I'm working on an MPI application where I'm using Open MPI instead of MPICH.
In my "master" program I call the function MPI::Intracomm::Spawn, which spawns
"slave" processes. It is not clear to me how to spawn the "slave" processes
over the network. Currently the "master" creates "slaves" on