Re: [OMPI users] OMPI 4.1 in Cygwin packages?

2021-02-08 Thread Martín Morales via users
Hi Marco,

Apologies for my delay. I tried 4.1.0 and it worked!!
Thank you very much for your assistance. Kind regards,

Martín

From: Marco Atzeri<mailto:marco.atz...@gmail.com>
Sent: Saturday, February 6, 2021 08:54
To: Martín Morales<mailto:martineduardomora...@hotmail.com>; Open MPI 
Users<mailto:users@lists.open-mpi.org>
Subject: Re: [OMPI users] OMPI 4.1 in Cygwin packages?




Re: [OMPI users] OMPI 4.1 in Cygwin packages?

2021-02-06 Thread Marco Atzeri via users

Martin,

What is the IP address of the machine you cannot connect to?

All those VMware interfaces look suspicious, anyway.


In the meantime I uploaded 4.1.0-1 for x86_64;
you can try it and see if it solves the issue.

The i686 version is still in the build phase.





Re: [OMPI users] OMPI 4.1 in Cygwin packages?

2021-02-05 Thread Martín Morales via users
Hi Marco,

The output is pasted below.
Thank you. Regards,

Martín


internal_name:  {C3C1A65B-A775-4604-A187-C1FDC48EC211}
flags: AF_INET6 up running multicast
address:   fe80::c4a3:827f:bd3c:141%16
friendly_name: Ethernet

internal_name:  {C3C1A65B-A775-4604-A187-C1FDC48EC211}
flags: AF_INET  up broadcast running multicast
address:   192.168.56.1
friendly_name: Ethernet

internal_name:  {8F915CBF-9C68-4DFA-9AE1-FF3207DA0CC9}
flags: AF_INET6 up multicast
address:   fe80::f84e:e8e9:b2e7:7f23%9
friendly_name: Local Area Connection* 1

internal_name:  {8F915CBF-9C68-4DFA-9AE1-FF3207DA0CC9}
flags: AF_INET  up broadcast multicast
address:   169.254.127.35
friendly_name: Local Area Connection* 1

internal_name:  {9C9C1DB9-1E8B-4893-A83F-C881060ED6DF}
flags: AF_INET6 up multicast
address:   fe80::3c7c:e0b3:af76:3a0%11
friendly_name: Local Area Connection* 2

internal_name:  {9C9C1DB9-1E8B-4893-A83F-C881060ED6DF}
flags: AF_INET  up broadcast multicast
address:   169.254.3.160
friendly_name: Local Area Connection* 2

internal_name:  {A6301D34-A586-4439-B7A7-69FA905CA167}
flags: AF_INET6 up running multicast
address:   fe80::e5c6:c83:8653:3cd8%14
friendly_name: VMware Network Adapter VMnet1

internal_name:  {A6301D34-A586-4439-B7A7-69FA905CA167}
flags: AF_INET  up broadcast running multicast
address:   192.168.148.1
friendly_name: VMware Network Adapter VMnet1

internal_name:  {B259A286-0A90-429D-97A1-D4CEAA97EA42}
flags: AF_INET6 up running multicast
address:   fe80::555d:6f18:486e:376e%15
friendly_name: VMware Network Adapter VMnet8

internal_name:  {B259A286-0A90-429D-97A1-D4CEAA97EA42}
flags: AF_INET  up broadcast running multicast
address:   192.168.200.1
friendly_name: VMware Network Adapter VMnet8

internal_name:  {1BA832E1-BEA7-4E7B-8FFA-9BBDCBA170A6}
flags: AF_INET6 up running multicast
address:   fe80::8038:114f:63e0:81a3%5
friendly_name: Wi-Fi

internal_name:  {1BA832E1-BEA7-4E7B-8FFA-9BBDCBA170A6}
flags: AF_INET  up broadcast running multicast
address:   192.168.100.45
friendly_name: Wi-Fi

internal_name:  {144D1ED1-0EE5-47E1-82A7-7E4ABB8DB2D8}
flags: AF_INET6 up multicast
address:   fe80::c4f8:6145:cc59:4c4b%3
friendly_name: Bluetooth Network Connection

internal_name:  {144D1ED1-0EE5-47E1-82A7-7E4ABB8DB2D8}
flags: AF_INET  up broadcast multicast
address:   169.254.76.75
friendly_name: Bluetooth Network Connection

internal_name:  {EB43FC56-BCC2-11EA-A07A-806E6F6E6963}
flags: AF_INET6 up loopback running multicast
address:   ::1
friendly_name: Loopback Pseudo-Interface 1

internal_name:  {EB43FC56-BCC2-11EA-A07A-806E6F6E6963}
flags: AF_INET  up loopback running multicast
address:   127.0.0.1
friendly_name: Loopback Pseudo-Interface 1

internal_name:  {A04CB0D8-879C-418C-8BB7-209EEADBDCD0}
flags: AF_INET  up broadcast multicast
address:   0.0.0.0
friendly_name: Ethernet (Kernel Debugger)

internal_name:  {D9635C22-48DE-4359-99BA-057A3850FA03}
flags: AF_INET  up broadcast multicast
address:   192.168.56.1
friendly_name: VirtualBox Host-Only Network
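Gilles's Feb 4 reply further down the thread suggests restricting btl/tcp to a single subnet with "--mca btl_tcp_if_include". As a rough illustration (not from the thread; the choice of the Wi-Fi subnet 192.168.100.0/24 is an assumption), a shell sketch of which of the addresses listed above such a /24 would select:

```shell
# Hypothetical filter: which of the reported IPv4 addresses fall inside
# 192.168.100.0/24? Only the matching interfaces would be used by btl/tcp.
in_subnet() {
  case "$1" in
    192.168.100.*) echo "$1 selected" ;;
    *)             echo "$1 excluded" ;;
  esac
}

for addr in 192.168.56.1 169.254.127.35 169.254.3.160 \
            192.168.148.1 192.168.200.1 192.168.100.45 127.0.0.1; do
  in_subnet "$addr"
done
```

Only the Wi-Fi address 192.168.100.45 would be selected; the VMware, VirtualBox, and link-local addresses would all be excluded. The point of the exercise is to pick a subnet that both hosts actually share.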





Re: [OMPI users] OMPI 4.1 in Cygwin packages?

2021-02-05 Thread Marco Atzeri via users

On 05.02.2021 16:18, Martín Morales via users wrote:

Hi Gilles,

I tried, but it hangs indefinitely without any output.

Regards,

Martín



Hi Martin,

Can you run get-interface, available at

http://matzeri.altervista.org/works/interface/

so we can see how Cygwin identifies all your network interfaces?

Regards
Marco



Re: [OMPI users] OMPI 4.1 in Cygwin packages?

2021-02-05 Thread Martín Morales via users
Hi Gilles,

I tried, but it hangs indefinitely without any output.
Regards,

Martín

From: Gilles Gouaillardet via users<mailto:users@lists.open-mpi.org>
Sent: Thursday, February 4, 2021 23:48
To: Open MPI Users<mailto:users@lists.open-mpi.org>
Cc: Gilles Gouaillardet<mailto:gilles.gouaillar...@gmail.com>
Subject: Re: [OMPI users] OMPI 4.1 in Cygwin packages?


Re: [OMPI users] OMPI 4.1 in Cygwin packages?

2021-02-04 Thread Gilles Gouaillardet via users
Martin,

at first glance, I could not spot the root cause.

That being said, the second node is sometimes referred to as
"WinDev2021Eval" in the logs, but it is also referred to as "worker".

What if you use the real names in your hostfile: DESKTOP-C0G4680 and
WinDev2021Eval instead of master and worker?

Cheers,

Gilles
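A minimal sketch of that suggestion (hostnames are the ones quoted in this thread; slot counts copied from the original hostfile):

```shell
# Hypothetical hostfile using the machines' real names instead of the
# "master"/"worker" aliases, as suggested above.
cat > hostfile <<'EOF'
DESKTOP-C0G4680 slots=5
WinDev2021Eval slots=5
EOF

# The job would then be launched exactly as before:
#   mpirun -np 1 -hostfile ./hostfile ./spawner.exe 8
cat hostfile
```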


Re: [OMPI users] OMPI 4.1 in Cygwin packages?

2021-02-04 Thread Martín Morales via users
Hello all,

Gilles, unfortunately, the result is the same. Attached is the log you asked for.

Jeff, some time ago I tried with OMPI 4.1.0 (Linux) and it worked.

Thank you both. Regards,

Martín

From: Jeff Squyres (jsquyres) via users<mailto:users@lists.open-mpi.org>
Sent: Thursday, February 4, 2021 16:10
To: Open MPI User's List<mailto:users@lists.open-mpi.org>
Cc: Jeff Squyres (jsquyres)<mailto:jsquy...@cisco.com>
Subject: Re: [OMPI users] OMPI 4.1 in Cygwin packages?

Do we know if this was definitely fixed in v4.1.x?


> On Feb 4, 2021, at 7:46 AM, Gilles Gouaillardet via users 
>  wrote:
>
> Martin,
>
> this is a connectivity issue reported by the btl/tcp component.
>
> You can try restricting the IP interface to a subnet known to work
> (and with no firewall) between both hosts
>
> mpirun --mca btl_tcp_if_include 192.168.0.0/24 ...
>
> If the error persists, you can
>
> mpirun --mca btl_tcp_base_verbose 20 ...
>
> and then compress and post the logs so we can have a look
>
>
> Cheers,
>
> Gilles
>

Re: [OMPI users] OMPI 4.1 in Cygwin packages?

2021-02-04 Thread Jeff Squyres (jsquyres) via users
Do we know if this was definitely fixed in v4.1.x?


-- 
Jeff Squyres
jsquy...@cisco.com



Re: [OMPI users] OMPI 4.1 in Cygwin packages?

2021-02-04 Thread Gilles Gouaillardet via users
Martin,

this is a connectivity issue reported by the btl/tcp component.

You can try restricting the IP interfaces to a subnet known to work
(and with no firewall) between both hosts:

mpirun --mca btl_tcp_if_include 192.168.0.0/24 ...

If the error persists, you can run

mpirun --mca btl_tcp_base_verbose 20 ...

and then compress and post the logs so we can have a look.


Cheers,

Gilles


Re: [OMPI users] OMPI 4.1 in Cygwin packages?

2021-02-04 Thread Martín Morales via users
Hi Marco,

Yes, I have a problem spawning to a “worker” host (spawning on localhost
works). There are just two machines: “master” and “worker”. I’m using
Windows 10 on both, with the same Cygwin and packages. Some details are
pasted below.
Thanks for your help. Regards,

Martín



Running:

mpirun -np 1 -hostfile ./hostfile ./spawner.exe 8

hostfile:

master slots=5
worker slots=5

Error:

At least one pair of MPI processes are unable to reach each other for
MPI communications.  This means that no Open MPI device has indicated
that it can be used to communicate between these processes.  This is
an error; Open MPI requires that all MPI processes be able to reach
each other.  This error can sometimes be the result of forgetting to
specify the "self" BTL.

Process 1 ([[31598,1],0]) is on host: DESKTOP-C0G4680
Process 2 ([[31598,2],2]) is on host: worker
BTLs attempted: self tcp

Your MPI job is now going to abort; sorry.
--
[DESKTOP-C0G4680:02828] [[31598,1],0] ORTE_ERROR_LOG: Unreachable in file 
/pub/devel/openmpi/v4.0/openmpi-4.0.5-1.x86_64/src/openmpi-4.0.5/ompi/dpm/dpm.c 
at line 493
[DESKTOP-C0G4680:02828] *** An error occurred in MPI_Comm_spawn
[DESKTOP-C0G4680:02828] *** reported by process [2070806529,0]
[DESKTOP-C0G4680:02828] *** on communicator MPI_COMM_SELF
[DESKTOP-C0G4680:02828] *** MPI_ERR_INTERN: internal error
[DESKTOP-C0G4680:02828] *** MPI_ERRORS_ARE_FATAL (processes in this 
communicator will now abort,
[DESKTOP-C0G4680:02828] ***and potentially your MPI job)

USER_SSH@DESKTOP-C0G4680 ~
$ [WinDev2012Eval:00120] [[31598,2],2] ORTE_ERROR_LOG: Unreachable in file 
/pub/devel/openmpi/v4.0/openmpi-4.0.5-1.x86_64/src/openmpi-4.0.5/ompi/dpm/dpm.c 
at line 493
[WinDev2012Eval:00121] [[31598,2],3] ORTE_ERROR_LOG: Unreachable in file 
/pub/devel/openmpi/v4.0/openmpi-4.0.5-1.x86_64/src/openmpi-4.0.5/ompi/dpm/dpm.c 
at line 493
--
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

ompi_dpm_dyn_init() failed
--> Returned "Unreachable" (-12) instead of "Success" (0)
--
[WinDev2012Eval:00121] *** An error occurred in MPI_Init
[WinDev2012Eval:00121] *** reported by process 
[15289389101093879810,12884901891]
[WinDev2012Eval:00121] *** on a NULL communicator
[WinDev2012Eval:00121] *** Unknown error
[WinDev2012Eval:00121] *** MPI_ERRORS_ARE_FATAL (processes in this communicator 
will now abort,
[WinDev2012Eval:00121] ***and potentially your MPI job)
[DESKTOP-C0G4680:02831] 2 more processes have sent help message 
help-mca-bml-r2.txt / unreachable proc
[DESKTOP-C0G4680:02831] Set MCA parameter "orte_base_help_aggregate" to 0 to 
see all help / error messages
[DESKTOP-C0G4680:02831] 1 more process has sent help message 
help-mpi-runtime.txt / mpi_init:startup:internal-failure
[DESKTOP-C0G4680:02831] 1 more process has sent help message 
help-mpi-errors.txt / mpi_errors_are_fatal unknown handle

Script spawner:

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char ** argv){
    int processesToRun;
    MPI_Comm intercomm;
    MPI_Info info;

    if(argc < 2){
        printf("Processes number needed!\n");
        return 0;
    }
    processesToRun = atoi(argv[1]);

    MPI_Init( NULL, NULL );
    printf("Spawning from parent:...\n");
    MPI_Comm_spawn( "./spawned.exe", MPI_ARGV_NULL, processesToRun,
        MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);

    MPI_Finalize();
    return 0;
}

Script spawned:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char ** argv){
    int hostName_len, rank, size;
    MPI_Comm parentcomm;
    char hostName[200];

    MPI_Init( NULL, NULL );
    MPI_Comm_get_parent( &parentcomm );
    MPI_Get_processor_name(hostName, &hostName_len);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (parentcomm != MPI_COMM_NULL) {
        printf("I'm the spawned h: %s  r/s: %i/%i\n", hostName, rank, size );
    }

    MPI_Finalize();
    return 0;
}




From: Marco Atzeri via users<mailto:users@lists.open-mpi.org>
Sent: Wednesday, February 3, 2021 17:58
To: users@lists.open-mpi.org<mailto:users@lists.open-mpi.org>
Cc: Marco Atzeri<mailto:marco.atz...@gmail.com>
Subject: Re: [OMPI users] OMPI 4.1 in Cygwin packages?


Re: [OMPI users] OMPI 4.1 in Cygwin packages?

2021-02-03 Thread Marco Atzeri via users

On 03.02.2021 21:35, Martín Morales via users wrote:

Hello,

I would like to know if any OMPI 4.1.* is going to be available in the 
Cygwin packages.


Thanks and regards,

Martín



Hi Martin,
Is there anything in it that is absolutely needed short term?

Any problem with the current 4.0.5 package?


The build is usually very time consuming,
and I am busy with other Cygwin stuff.

Regards
Marco


[OMPI users] OMPI 4.1 in Cygwin packages?

2021-02-03 Thread Martín Morales via users

Hello,

I would like to know if any OMPI 4.1.* is going to be available in the Cygwin 
packages.
Thanks and regards,

Martín