Thank you, Cristobal.
That is good news.
Gus Correa
Cristobal Navarro wrote:
I have good news: after updating to a newer kernel on the Ubuntu server nodes, sm is no longer a problem for the Nehalem CPUs!
My older kernel was
Linux 2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64 GNU/Linux
and I upgraded to
Linux agua 2.6.32-24-server #39-Ubuntu SMP Wed
Cristobal Navarro wrote:
Gus
my kernel for all nodes is this one:
Linux 2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64
GNU/Linux
The kernel is out of my league.
However, it would be great if somebody clarified
for good these issues with Nehalem/Westmere, HT,
shared memory and
Gus
Cristobal Navarro wrote:
For the moment I will use this configuration, at least for development/testing of the parallel programs.
Lag is minimal :)
Whenever I get another kernel update,
Hi Cristobal
Please, read my answer (way down the message) below.
Cristobal Navarro wrote:
On Wed, Jul 28, 2010 at 3:28 PM, Gus Correa wrote:
Hi Cristobal
Cristobal Navarro wrote:
On Wed, Jul 28, 2010 at 11:09
Hi Cristobal
Cristobal Navarro wrote:
On Wed, Jul 28, 2010 at 11:09 AM, Gus Correa wrote:
Hi Cristobal
In case you are not using full path name for mpiexec/mpirun,
what does "which mpirun" say?
--> $which mpirun
To clear things up: I can still run a hello world on all 16 threads, but after a few more repetitions of the example the kernel crashes :(
fcluster@agua:~$ mpirun --hostfile localhostfile -np 16 testMPI/hola
Process 0 on agua out of 16
Process 2 on agua out of 16
Process 14 on agua out of 16
Process 8
On Wed, Jul 28, 2010 at 11:09 AM, Gus Correa wrote:
> Hi Cristobal
>
> In case you are not using full path name for mpiexec/mpirun,
> what does "which mpirun" say?
>
--> $which mpirun
/opt/openmpi-1.4.2
>
> Often times this is a source of confusion, old versions
yes,
somehow after the second install, the installation is consistent.
I'm only running into an issue; it might be MPI, I'm not sure.
These nodes each have 8 physical processors (2x Intel Xeon quad-core) and 16 virtual ones; btw, I have Ubuntu Server 64-bit 10.04 installed on these nodes.
the
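As an aside, the 8-physical/16-virtual core layout described above can be checked on Linux through the standard /proc/cpuinfo fields (a sketch; field names vary slightly between kernels):

```shell
# Count logical CPUs (this thread's nodes report 16 with HT enabled):
grep -c '^processor' /proc/cpuinfo
# Physical cores per socket ("cpu cores" field; may be absent on some VMs):
grep '^cpu cores' /proc/cpuinfo | sort -u || true
```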
This issue is usually caused by installing one version of Open MPI over an
older version:
http://www.open-mpi.org/faq/?category=building#install-overwrite
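For reference, the clean-reinstall sequence that the FAQ describes might be sketched as follows (kept as comments because the paths are site-specific; the /opt/openmpi-1.4.2 prefix is the one used in this thread):

```shell
# Per the install-overwrite FAQ: remove the old installation tree before
# reinstalling, rather than installing the new version on top of it.
#
#   rm -rf /opt/openmpi-1.4.2
#   cd openmpi-1.4.2/                         # fresh source tree
#   ./configure --prefix=/opt/openmpi-1.4.2
#   make all install
```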
On Jul 27, 2010, at 10:35 PM, Cristobal Navarro wrote:
>
> On Tue, Jul 27, 2010 at 7:29 PM, Gus Correa
On Tue, Jul 27, 2010 at 7:29 PM, Gus Correa wrote:
> Hi Cristobal
>
> Does it run only on the head node alone?
> (Fuego? Agua? Acatenango?)
> Try to put only the head node on the hostfile and execute with mpiexec.
>
--> I will try with the head node only and post results.
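For that head-node-only test, a one-node hostfile might look like this (the hostname "agua" and the 8 physical cores are taken from this thread; the filename and slots syntax are standard Open MPI conventions, not the exact file used here):

```shell
# Hypothetical hostfile listing only the head node "agua" with its
# 8 physical cores as slots:
cat > localhostfile <<'EOF'
agua slots=8
EOF
cat localhostfile
```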
I compiled with the absolute path just in case:
fcluster@agua:~$ /opt/openmpi-1.4.2/bin/mpicc testMPI/hello.c -o
testMPI/hola
fcluster@agua:~$ mpirun --hostfile myhostfile -np 5 testMPI/hola
[agua:03547] mca: base: component_find: unable to open
/opt/openmpi-1.4.2/lib/openmpi/mca_btl_ofud: perhaps a
Thanks Gus,
but I already had the paths set:
fcluster@agua:~$ echo $PATH
/opt/openmpi-1.4.2/bin:/opt/cfc/sge/bin/lx24-amd64:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
fcluster@agua:~$ echo $LD_LIBRARY_PATH
/opt/openmpi-1.4.2/lib:
fcluster@agua:~$
Even weirder, errors come
Hi Cristobal
Try using the --prefix option of mpiexec.
"man mpiexec" is your friend!
Alternatively, append the OpenMPI directories to your
PATH *and* LD_LIBRARY_PATH in your .bashrc/.cshrc file.
See this FAQ:
http://www.open-mpi.org/faq/?category=running#adding-ompi-to-path
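A sketch of what those .bashrc additions might look like, using the /opt/openmpi-1.4.2 prefix that appears in this thread:

```shell
# Prepend the Open MPI bin and lib directories, as the FAQ suggests:
export PATH=/opt/openmpi-1.4.2/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi-1.4.2/lib:${LD_LIBRARY_PATH:-}

# Alternatively, pass the prefix per invocation (see "man mpirun"):
#   mpirun --prefix /opt/openmpi-1.4.2 --hostfile myhostfile -np 5 testMPI/hola
```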
I hope it helps,
Gus