Re: [OMPI users] MPI_Reduce related questions

2016-11-10 Thread MM
On 27 October 2016 at 18:35, MM <finjulh...@gmail.com> wrote: "Hello, given MPI nodes 0..N-1, 0 being the root (master) node, and trying to determine the maximum value of a function over a large range of values of its parameters, what are the differences betw…"

[OMPI users] MPI_Reduce related questions

2016-10-27 Thread MM
Hello, given MPI nodes 0..N-1, 0 being the root (master) node, and trying to determine the maximum value of a function over a large range of values of its parameters, what are the differences between, if any: 1. At node i: evaluate f for each of the values assigned to i of the parameters…
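A sketch of the pattern being asked about, with MPI itself replaced by a plain Python simulation (the objective function, parameter range, and rank count below are all invented for illustration). In real Open MPI code each rank would evaluate its slice of the parameter space and then call MPI_Reduce with MPI_MAX, or MPI_MAXLOC to also recover which parameter achieved the maximum:

```python
# Simulation of the reduce-max pattern: partition the parameter range across
# "ranks", compute a local (value, param) maximum per rank, then combine the
# local maxima -- which is what MPI_Reduce with MPI_MAXLOC does at the root.

def f(x):
    # Hypothetical objective: any function of the parameters would do.
    return -(x - 3.7) ** 2

def simulate_reduce_max(params, n_ranks):
    # Round-robin split of the parameter range, as a scatter might do.
    chunks = [params[r::n_ranks] for r in range(n_ranks)]
    # Each "rank" keeps only its local best (value, param) pair.
    local_maxima = [max((f(p), p) for p in chunk) for chunk in chunks if chunk]
    # The reduction: the root ends up with the global best pair.
    return max(local_maxima)

best_val, best_param = simulate_reduce_max([x * 0.1 for x in range(100)], n_ranks=4)
print(best_val, best_param)
```

The point of the simulation: only one small (value, location) pair per rank crosses the network, regardless of how large each rank's slice of the parameter space is.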

Re: [OMPI users] mpirun works with cmd line call , but not with app context file arg

2016-10-16 Thread MM
So, do you have at least 4 cores on both A.lan and B.lan? Yes, both A and B have exactly 4 cores each. Cheers, Gilles. On Sunday, October 16, 2016, MM <finjulh...@gmail.com> wrote: "Hi, openmpi 1.10.3, thi…"

[OMPI users] mpirun works with cmd line call , but not with app context file arg

2016-10-16 Thread MM
Hi, openmpi 1.10.3. This call: mpirun --hostfile ~/.mpihosts -H localhost -np 1 prog1 : -H A.lan -np 4 prog2 : -H B.lan -np 4 prog2 works, yet this one: mpirun --hostfile ~/.mpihosts --app ~/.mpiapp doesn't, where ~/.mpiapp contains: -H localhost -np 1 prog1 / -H A.lan -np 4 prog2 / -H B.lan -np 4 prog2
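For reference, an app context file holds one application context per line, each line taking the same options that one colon-separated segment of the command line would take. A sketch of what the poster's ~/.mpiapp presumably looks like (prog1, prog2, and the host names are the poster's):

```
# ~/.mpiapp: one app context per line, mirroring the working colon-separated form
-H localhost -np 1 prog1
-H A.lan -np 4 prog2
-H B.lan -np 4 prog2
```

If the two forms still behave differently with an identical layout, that points at a difference in how --app parses per-context options rather than at the file's contents.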

[OMPI users] How to yield CPU more when not computing (was curious behavior during wait for broadcast: 100% cpu)

2016-10-16 Thread MM
I would like to see if there are any updates re this thread from back in 2010: https://mail-archive.com/users@lists.open-mpi.org/msg15154.html I've got 3 boxes at home, a laptop and 2 other quad-core nodes. When the CPU is at 100% for a long time, the fans make quite some noise :-) The laptop runs…
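For what it's worth, the knob discussed in that 2010 thread still exists in the 1.x series: the MCA parameter mpi_yield_when_idle makes the progress engine yield the CPU between polls, so waiting ranks give the scheduler a chance to run something else (they still poll, so CPU usage drops but does not reach zero; Open MPI also turns this on automatically when nodes are oversubscribed). A sketch of the invocation, with the process count and program name made up:

```
mpirun --mca mpi_yield_when_idle 1 -np 4 ./prog
```

This trades a little latency on message arrival for much less busy-spinning, which is usually the right trade on a home box with fans.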

Re: [OMPI users] scatter/gather, tcp, 3 nodes, homogeneous, # RAM

2016-07-26 Thread MM
…f_i / sum(f_i) => n_i for each core, with sum(n_i) = N. 2. A 2nd stage could then be to ensure that no n_i > m_i/M, which would then involve taking any excesses (n_i - m_i/M) and spreading them over the other cores. Or perhaps both CPU frequencies and max memory could be considered in one go, but I don't know how to do that? Thanks MM
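The two-stage scheme described above can be put into code. A simulation of the arithmetic only (the frequencies, per-core memory caps expressed directly as task capacities, and the rounding policy are my own assumptions, not the poster's):

```python
# Stage 1: assign n_i tasks to core i in proportion to its CPU frequency f_i.
# Stage 2: clamp any core whose share exceeds its memory capacity and spread
# the excess over cores that still have headroom.

def distribute(N, freqs, mem_caps):
    # Stage 1: n_i = round(N * f_i / sum(f_i)); absorb rounding drift on core 0.
    total_f = sum(freqs)
    n = [round(N * f / total_f) for f in freqs]
    n[0] += N - sum(n)
    # Stage 2: collect the excess over each core's capacity...
    excess = 0
    for i in range(len(n)):
        if n[i] > mem_caps[i]:
            excess += n[i] - mem_caps[i]
            n[i] = mem_caps[i]
    # ...and hand it to cores with remaining headroom.
    for i in range(len(n)):
        if excess == 0:
            break
        take = min(mem_caps[i] - n[i], excess)
        n[i] += take
        excess -= take
    return n

print(distribute(100, freqs=[2.0, 2.0, 3.0, 3.0], mem_caps=[20, 20, 40, 40]))
```

Doing both criteria "in one go" would amount to a small linear program (maximize throughput subject to per-core memory constraints), but the two-pass version above is usually close enough for static partitioning.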

[OMPI users] scatter/gather, tcp, 3 nodes, homogeneous, # RAM

2016-06-14 Thread MM
Hello, I have the following three 1-socket nodes: node1: 4GB RAM, 2 cores: rank 0, rank 1; node2: 4GB RAM, 4 cores: rank 2, rank 3, rank 4, rank 5; node3: 8GB RAM, 4 cores: rank 6, rank 7, rank 8, rank 9. I have a model that takes an input and produces an output, and I want to run this model for N possible…

[OMPI users] [slightly off topic] hardware solutions with monetary cost in mind

2016-05-20 Thread MM
…can the executables of my MPI program be sent over the wire before running them? If we exclude GPU or other non-MPI solutions, and with cost being a primary factor, what is the progression path from 2 boxes to a cloud-based solution (Amazon and the like)? Regards, MM

[OMPI users] track progress of a mpi gather

2016-04-24 Thread MM
Hello, with a miniature case of 3 Linux quad-core boxes linked via 1Gbit Ethernet, I have a UI that runs on 1 of the 3 boxes and that is the root of the communicator. I have a 1-second-running function of up to 10 parameters; my parameter space fits in the memory of the root, the space of it is…

[OMPI users] latest stable and win7/msvc2013

2014-07-16 Thread MM
…? Regards, MM

Re: [OMPI users] [Boost-mpi] openmpi 1.6.2 boost 1.54 mswin7 vs2010 Threading support:No

2014-02-13 Thread MM
On 13 February 2014 15:33, Matthias Troyer wrote: "Hi, in order to use MPI in a multi-threaded environment, even when only one thread uses MPI, you need to request the necessary level of thread support in the environment constructor. Then you can check whether…"

Re: [OMPI users] openmpi 1.6.2 boost 1.54 mswin7 vs2010 Threading support:No

2014-02-13 Thread MM
Apologies for the noise; I was getting output from the 2 processes and their threads, and I was focused on only 1 process. Please ignore. On 13 February 2014 14:33, MM <finjulh...@gmail.com> wrote: "Hello, I am running a MPI application on a single host, with…"

[OMPI users] openmpi 1.6.2 boost 1.54 mswin7 vs2010 Threading support:No

2014-02-13 Thread MM
…with msvc, and stepping into MPI_Isend (I don't have the sources for it). At that moment, suddenly a new thread is created, and a call to f() is made. This all sounds quite nightmarish. I understand I haven't presented any specific code to receive an accurate answer, but any help is appreciated. Regards, MM

Re: [OMPI users] globally unique 64bit unsigned integer (homogenous)

2014-01-03 Thread MM
MPI_Comm_size(MPI_COMM_WORLD, &size); unique += size; If this isn't sufficient, please ask the question differently. There is no canonical method for this. Jeff. Sent from my iPhone. On Jan 3, 2014, at 3:50 AM, MM <finjulh...@gmail.com> wrote:

[OMPI users] globally unique 64bit unsigned integer (homogenous)

2014-01-03 Thread MM
Hello, Is there a canonical way to obtain a globally unique 64bit unsigned integer across all mpi processes, multiple times? Thanks MM
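The scheme from Jeff's reply earlier in this thread (seed each process's counter with its own rank, then stride by the communicator size) can be sketched without MPI; the 4-process world and 3 draws per "rank" below are arbitrary:

```python
# Stride scheme for collision-free IDs: rank r draws r, r+size, r+2*size, ...
# The streams of different ranks never overlap, need no communication after
# the initial rank/size query, and fit in 64 bits for ~2**64/size draws.

def make_id_generator(rank, size):
    unique = rank
    def next_id():
        nonlocal unique
        out = unique
        unique += size
        return out
    return next_id

size = 4                                     # stand-in for MPI_Comm_size
gens = [make_id_generator(r, size) for r in range(size)]
ids = [g() for g in gens for _ in range(3)]  # 3 draws per "rank"
print(sorted(ids))
```

In real code, rank and size would come from MPI_Comm_rank/MPI_Comm_size and the counter would be a uint64_t; no collective call is needed per ID, which is what makes this cheap.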

Re: [OMPI users] (no subject)

2013-10-31 Thread MM
Of course; by this you mean, with the same total number of processes, e.g. 64 processes on 1 node using shared memory vs 64 processes spread over 2 nodes (32 each)? On 29 October 2013 14:37, Ralph Castain wrote: "As someone previously noted, apps will always run slower…"

Re: [OMPI users] calculation progress status

2013-10-21 Thread MM
The 4 loops are not naturally in sync. Would you suggest modifying the loop to do an MPI_Isend after x iterations (for the clients) and an MPI_Irecv on the root? Thanks MM
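That suggestion can be simulated without MPI: below, a queue stands in for the message channel, a worker "sends" a percentage every x iterations, and the root drains whatever has arrived. In real code the sends would be MPI_Isend of a small int on a dedicated progress tag, and the root would poll with MPI_Iprobe/MPI_Irecv between chunks of its own work (the iteration counts here are made up):

```python
from collections import deque

def worker(total_iters, report_every, inbox):
    # The worker's compute loop; every `report_every` iterations it posts a
    # cheap progress message (here: a percentage) without blocking.
    for it in range(1, total_iters + 1):
        pass  # ... the real per-iteration computation goes here ...
        if it % report_every == 0:
            inbox.append(it * 100 // total_iters)  # like MPI_Isend of an int

def root_poll(inbox):
    # Like looping MPI_Iprobe/MPI_Irecv: drain all pending progress messages
    # and keep only the most recent one; returns None if nothing arrived.
    latest = None
    while inbox:
        latest = inbox.popleft()
    return latest

inbox = deque()
worker(total_iters=50, report_every=10, inbox=inbox)
print(root_poll(inbox))  # most recent progress percentage
```

Because the loops are not in sync, the root should treat each worker's latest message as its current state rather than expecting one message per interval.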

[OMPI users] calculation progress status

2013-10-21 Thread MM
…like to report to the root process some progress indicator, i.e. 40% done so far, and so on. What is the customary solution? Thanks MM

Re: [OMPI users] OpenMPI with cMake on Windows

2012-12-21 Thread MM
On 18 December 2012 22:04, Stephen Conley wrote: "Hello, I have installed CMake version 2.8.10.2 and OpenMPI version 1.6.2 on a 64-bit Windows 7 computer. OpenMPI is installed in "C:\program files\OpenMPI" and the path has been updated…"

Re: [OMPI users] localhost only

2012-01-23 Thread MM
"…travel and just got back, so I'll take a look and see why we aren't doing so." Perhaps this was a simple implementation? Thanks MM

Re: [OMPI users] localhost only

2012-01-17 Thread MM
…+ openmpi, but single-box shared-memory openmpi multiprocess is not necessarily worse than a single-process multithread openmp. '-mca btl sm,self' indeed didn't work. Ralph, please let me know if testing is required. MM

Re: [OMPI users] feature requests: mpic++ to report both release and debug flags

2012-01-17 Thread MM
…MM. From: Shiqing Fan, 17 January 2012 15:06, Re: feature requests: mpic++ to report both release and debug flags: "Hi MM, actually option 3 has already been implemented for the Windows build, and it seems adequate…"

Re: [OMPI users] localhost only

2012-01-17 Thread MM
Even with a -host localhost? Is there a way to change that? I have a long commute from work and I run 4 MPI processes on my quad-core laptop, and while commuting there's no connection :-) MM

Re: [OMPI users] localhost only

2012-01-17 Thread MM
…Have you tried to specify the hosts with something like this? mpirun -np 2 -host localhost ./my_program; see 'man mpirun' for more details. I hope it helps, Gus Correa. On Jan 16, 2012, at 6:34 PM, MM wrote: "hi, w…"

[OMPI users] localhost only

2012-01-16 Thread MM
…Enabled: No. Ethernet adapter Wireless Network Connection: Media State: Media disconnected; Description: Intel(R) WiFi Link 5100 AGN; Physical Address: … Regards, MM

[OMPI users] feature requests: mpic++ to report both release and debug flags

2012-01-16 Thread MM
…distributions to do similarly. It would also be useful to publish the CMake flags used by default to produce the win binaries. I am available to test the packages if possible; also, is there a wiki for requests or a similar system where I should file the above? Regards, MM

Re: [OMPI users] How to justify the use MPI codes on multicore systems/PCs?

2011-12-11 Thread MM
…across the threads in the same process. I'd be curious to see some timing comparisons. MM

Re: [OMPI users] orte_debugger_select and orte_ess_set_name failed

2011-11-29 Thread MM
Fantastic, thank you very much. From: Shiqing Fan, 29 November 2011 14:10: "Hi MM, that doesn't really help. Do you need a debug version…"

Re: [OMPI users] orte_debugger_select and orte_ess_set_name failed

2011-11-29 Thread MM
…I built openmpi static libs (with dll C/C++ runtime); OMPI_IMPORTS is __not__ defined, that's how I got it to compile. MM

Re: [OMPI users] orte_debugger_select and orte_ess_set_name failed

2011-11-29 Thread MM
From: Shiqing Fan, 25 November 2011 22:19: "Hi MM, do you really want to build Open MPI by yourself? If you only need the libraries, probably you may stick to 1.5.4…"

Re: [OMPI users] open-mpi error

2011-11-24 Thread MM
…and that may work. MM. From: Markus Stiller, 24 November 2011 20:41: "Hello, I have some problem with MPI; I looked in the FAQ…"

Re: [OMPI users] orte_debugger_select and orte_ess_set_name failed

2011-11-23 Thread MM
Hi Shiqing, is the info provided useful to understand what's going on? Alternatively, is there a way to get the provided binaries for win but off trunk rather than off 1.5.4 as on the website? I don't have this problem when I link against those libs. Thanks MM

Re: [OMPI users] orte_debugger_select and orte_ess_set_name failed

2011-11-21 Thread MM
…of openmpi. But to be able to link against vs2010 Release libs of openmpi, I need them to be linked against the Release C runtime, so I might as well link against the debug version of the openmpi libs. Your help is very appreciated. MM

[OMPI users] vs2010: MPI_Address() unresolved

2011-11-18 Thread MM
…somehow). I gather this MPI_Address() function resides in libmpi.lib and libmpid.lib. PS: I didn't have these link errors when I built against the prebuilt win libraries from the website; what are the CMake flags for those? Thanks, MM

[OMPI users] mpic++-wrapper-data.txt msvc10 Release/Debug 1.5.4

2011-11-18 Thread MM
…that file for Release, build boost.mpi, override for Debug, build for Debug. Thanks, MM

Re: [OMPI users] mpirun should run with just the localhost interface on win?

2011-10-25 Thread MM
If the interface is down, should localhost still allow mpirun to run MPI processes?

[OMPI users] mpirun should run with just the localhost interface on win?

2011-10-08 Thread MM
On WinXP, with the following net setup (just localhost; is it on?): C:\trunk-build-release>ipconfig /all: Windows IP Configuration: Host Name: SOMEHOSTNAME; Primary Dns Suffix: DOMAIN.SOMECO.COM; Node Type: …

[OMPI users] building boost.mpi with openmpi: mpi.jam

2011-06-11 Thread MM
Hello. boost: 1.46.1, openmpi: 1.5.3, winxp 64-bit. For openmpi mailing list users: boost comes with a Boost.MPI library, a C++-nativized library that wraps around any MPI-1 implementation available. Boost libraries can be built with bjam, a tool that is part of a build system. It comes…