Thanks for looking into this!
I'm going to file a feature enhancement request for OMPI to add this option
once the PGI debugger works with Open MPI (I don't want to add it before
then, because it may be misleading to users).
Check out "Windows Compute Cluster Server 2003",
http://www.microsoft.com/windowsserver2003/ccs/default.mspx.
From the FAQ: "Windows Compute Cluster Server 2003 comes with the
Microsoft Message Passing Interface (MS MPI), an MPI stack based on the
MPICH2 implementation from Argonne National Laboratory."
As far as the nightly builds go, I'm still seeing what I believe to be
this problem in both r10670 and r10652. This is happening on
both Linux and OS X. Below are the systems and the ompi_info output for the
newest revision, r10670.
As an example of the error, when running HPL with Myrinet I get the
Hey Justin,
Please provide us with your MCA parameters (if any); these could be in a
config file, in environment variables, or on the command line.
Thanks,
Galen
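For reference, the three places Galen mentions look roughly like this (a
sketch; the parameter name "btl" and the value "tcp,self" are only examples,
not a suggested setting for this problem):

```shell
# 1. On the mpirun command line:
#      mpirun --mca btl tcp,self -np 4 ./a.out

# 2. As an environment variable, named OMPI_MCA_<param_name>:
export OMPI_MCA_btl=tcp,self
echo "$OMPI_MCA_btl"

# 3. In a config file ($HOME/.openmpi/mca-params.conf),
#    one "name = value" pair per line:
#      btl = tcp,self
```

Command-line settings override environment variables, which in turn override
the config file.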
On Jul 6, 2006, at 9:22 AM, Justin Bronder wrote:
As far as the nightly builds go, I'm still seeing what I believe to be
this problem
Disregard the failure on Linux; a rebuild from scratch of HPL and Open MPI
seems to have resolved the issue. At least I'm not getting the errors
during the residual checks.
However, this is persisting under OS X.
Thanks,
Justin.
On 7/6/06, Justin Bronder wrote:
For OS
Good Day:
I am getting the following error messages every time I run a very simple
program that spawns child processes:
[turkana:27949] [0,0,0] ORTE_ERROR_LOG: Not found in file
base/soh_base_get_proc_soh.c at line 80
[turkana:27949] [0,0,0] ORTE_ERROR_LOG: Not found in file
Justin,
Is the OS X run showing the same residual failure?
- Galen
On Jul 6, 2006, at 10:49 AM, Justin Bronder wrote:
Disregard the failure on Linux, a rebuild from scratch of HPL and OpenMPI
seems to have resolved the issue. At least I'm not getting the errors
during the residual
Hi Saadat
Could you tell us something more about the system you are using? What type
of processors, operating system, any resource manager (e.g., SLURM, PBS),
etc?
Thanks
Ralph
On 7/6/06 10:49 AM, "s anwar" wrote:
> Good Day:
>
> I am getting the following error messages
With 1.0.3a1r10670 the same problem is occurring, with the same configure
arguments as before. For clarity, the Myrinet driver we are using is 2.0.21:
node90:~/src/hpl/bin/ompi-xl-1.0.3 jbronder$ gm_board_info
GM build ID is "2.0.21_MacOSX_rc20050429075134PDT
Ralph:
I am using Fedora Core 4 (Linux turkana 2.6.12-1.1390_FC4smp #1 SMP Tue Jul
5 20:21:11 EDT 2005 i686 athlon i386 GNU/Linux). The machine is a
dual-processor Athlon-based machine. No cluster resource manager, just an
rsh/ssh-based setup.
Thanks.
Saadat.
On 7/6/06, Ralph H Castain
Ralph:
I am running the application without mpirun, i.e. ./foobar. So, according to
your definition of a singleton above, I am calling comm_spawn from a singleton.
Thanks.
Saadat.
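For context, a minimal parent program of the kind being described might look
like the sketch below (the child executable path "./child" and the child
count are placeholder assumptions; building and running it requires an MPI
installation, so this is illustrative only):

```c
#include <mpi.h>
#include <stdio.h>

/* Sketch of a parent that spawns child processes via MPI_Comm_spawn.
 * "./child" is a placeholder for the actual child executable. Running
 * this directly (./parent, no mpirun) exercises the singleton case
 * discussed above; running it under mpirun gives a comparison point. */
int main(int argc, char *argv[])
{
    MPI_Comm children;
    int errcodes[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &children, errcodes);
    printf("spawned 2 children\n");
    MPI_Comm_disconnect(&children);
    MPI_Finalize();
    return 0;
}
```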
On 7/6/06, Ralph Castain wrote:
Thanks Saadat
Could you clarify how you are running this
On Jul 5, 2006, at 8:54 AM, Marcin Skoczylas wrote:
A few posts ago I saw almost the same question as I have, but it didn't
give me a satisfactory answer.
I have a setup like this:
GUI program on some machine (e.g. a laptop)
Head listening on a TCP/IP socket for commands from the GUI.
Workers waiting for
hi
I am trying to debug my MPI program, but printf debugging is not getting me
far, and I need something that can show me variable values and the current
line of execution (and where it was called from), something like gdb with
MPI.
Is there anything like that?
thank you very much for your help,
Manal