Klymak Jody wrote:
Hi Robert,
Sorry if this is off-topic for the more knowledgeable here...
On 14-Jul-09, at 7:50 PM, Robert Kubrick wrote:
By setting processor affinity you can force execution of each
process on a specific core, thus limiting context switching. I
know affinity wasn'
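For reference, "processor affinity" here means binding a process to a core so the scheduler cannot migrate it. A minimal sketch of what that looks like on Linux (the call is Linux-specific, which is part of why its meaning on OS X is unclear below):

    /* Minimal sketch, Linux-specific: pin the calling process to core 2
     * with sched_setaffinity(2). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(2, &set);                 /* allow execution on core 2 only */
        if (sched_setaffinity(0, sizeof(set), &set) != 0)  /* 0 = this process */
            perror("sched_setaffinity");
        /* ... from here on, this process stays on core 2 ... */
        return 0;
    }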
On Jul 14, 2009, at 9:03 PM, Klymak Jody wrote:
On 14-Jul-09, at 5:14 PM, Robert Kubrick wrote:
Jody,
Just to make sure, you did set processor affinity during your test
right?
I'm not sure what that means in the context of OS X.
By setting processor affinity you can force execution of each
process on a specific core, thus limiting context switching.
On Jul 13, 2009, at 9:28 PM, Klymak Jody wrote:
Hi Robert,
I got inspired by your question to run a few more tests. They are
crude, and I don't have actual CPU timing information because of a
library mismatch
The Open MPI FAQ recommends not oversubscribing the available cores
for best performance, but is this still true? The new Nehalem
processors are built to run 2 threads on each core. On an 8-socket
system, that sums to 128 threads that Intel claims can be run
without significant performance degradation.
Feels like déjà vu: http://www.linux-mag.com/cache/7407/1.html
Doesn't MapReduce do what MPI has been doing for a lot longer?
Regardless of MPI, when sending C++ objects over the network you have
to serialize their contents. The structures, or classes, have to be
encoded into a stream of bytes, sent over the network, then decoded
back into their complex object types by the receiving application.
There is no way to send objects
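As an illustration of that encode/send/decode cycle, here is a minimal sketch using MPI_Pack/MPI_Unpack (the Msg type and its fields are hypothetical; run with 2 ranks):

    #include <mpi.h>
    #include <string.h>

    /* Hypothetical example type; the point is the manual encode/decode. */
    typedef struct { int id; double value; char name[32]; } Msg;

    int main(int argc, char **argv)
    {
        int rank, pos = 0;
        char buf[128];
        Msg m;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            m.id = 1; m.value = 3.14; strcpy(m.name, "example");
            /* Encode each field into a contiguous byte stream... */
            MPI_Pack(&m.id,    1, MPI_INT,    buf, sizeof(buf), &pos, MPI_COMM_WORLD);
            MPI_Pack(&m.value, 1, MPI_DOUBLE, buf, sizeof(buf), &pos, MPI_COMM_WORLD);
            MPI_Pack(m.name,  32, MPI_CHAR,   buf, sizeof(buf), &pos, MPI_COMM_WORLD);
            MPI_Send(buf, pos, MPI_PACKED, 1, 0, MPI_COMM_WORLD); /* ...and ship it */
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_PACKED, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            /* Decode in the same field order on the receiving side. */
            MPI_Unpack(buf, sizeof(buf), &pos, &m.id,    1, MPI_INT,    MPI_COMM_WORLD);
            MPI_Unpack(buf, sizeof(buf), &pos, &m.value, 1, MPI_DOUBLE, MPI_COMM_WORLD);
            MPI_Unpack(buf, sizeof(buf), &pos, m.name,  32, MPI_CHAR,   MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }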
On Jul 4, 2009, at 8:24 AM, Jeff Squyres wrote:
On Jul 3, 2009, at 7:42 PM, Dorian Krause wrote:
I would discourage you from using the C++ bindings, since (to my
knowledge) they might be removed from MPI 3.0 (there is such a
proposal).
There is a proposal that has passed one vote so far to
On May 5, 2009, at 2:47 PM, Jeff Squyres wrote:
On May 5, 2009, at 1:59 PM, Robert Kubrick wrote:
I am preparing a presentation where I will discuss commodity
interconnects and the evolution of Ethernet and InfiniBand NICs. The
idea is to show the advance in network
Greetings,
I am preparing a presentation where I will discuss commodity
interconnects and the evolution of Ethernet and InfiniBand NICs. The
idea is to show the advance in network interface speeds over time on
a chart. So far I have collected the following *approximate* data
for Ethernet
How is this possible?
dx:~> mpirun -v -np 2 --mca btl self,sm --host dx,sx hostname
dx
sx
dx:~> netstat -i
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0   1500   0  998755      0      0      0 1070323      0      0      0 BMRU
eth1
Thanks,
george.
On Nov 8, 2008, at 12:14 PM, Robert Kubrick wrote:
I am having problems building OMPI 1.2.7 on an Intel Xeon quad-core
64-bit server. The compilation completes but ompi_info
hangs after printing the OMPI version:
# ompi_info
1.2.7
I tried to run a few MPI applications on this same install and they
do work fine.
I am having problems building OMPI 1.2.7 on an Intel Xeon quad-core
64-bit server. The compilation completes but ompi_info hangs after
printing the OMPI version:
# ompi_info
1.2.7
I tried to run a few MPI applications on this same install and they
do work fine. What can cause ompi_info to
are semi-heterogeneous (e.g., some have OFED installed,
some do not, etc.).
On Nov 6, 2008, at 1:00 AM, Robert Kubrick wrote:
According to this FAQ, one should be able to compile on a computer
and then run the OMPI program on different hardware, as long as the
C++ compiler and OMPI versions are the same.
According to this FAQ, one should be able to compile on a computer
and then run the OMPI program on different hardware, as long as the
C++ compiler and OMPI versions are the same:
http://www.open-mpi.org/faq/?category=sysadmin#new-openmpi-version
I have the following situation:
Server 1
Fab
I'm not sure how I should interpret this message:
[local:17344] *** An error occurred in MPI_Testsome
[local:17344] *** on communicator MPI COMMUNICATOR 5 CREATE FROM 0
[local:17344] *** MPI_ERR_TRUNCATE: message truncated
[local:17344] *** MPI_ERRORS_ARE_FATAL (goodbye)
mpiexec noticed that job
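MPI_ERR_TRUNCATE means a matching receive buffer was smaller than the incoming message. A minimal sketch that reproduces this class of error (counts, tag, and ranks are illustrative; run with 2 ranks):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            int payload[8] = {0};
            MPI_Send(payload, 8, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* 8 ints */
        } else if (rank == 1) {
            int small[4];
            /* The receive buffer holds only 4 ints: the matching 8-int
             * message is truncated, raising MPI_ERR_TRUNCATE (fatal by
             * default under MPI_ERRORS_ARE_FATAL, as in the log above). */
            MPI_Recv(small, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
        MPI_Finalize();
        return 0;
    }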
Recompile your own version of Open MPI in a local directory, then set
your PATH to pick up the binaries from your local install:
export PATH=/my/openmpi/install/bin:$PATH
mpicxx -show
On Sep 21, 2008, at 11:05 PM, Shafagh Jafer wrote:
I have tried this, but it didn't help :-( Could anyone help, pleas
The line
Signal code: Address not mapped (1)
indicates that there is probably a mismatch between the runtime
library and the linked version. Make sure that you link the program
and run it using the same installation base. Are the libraries in
/usr/mpi/fsl_openmpi_gcc_1.2.6 the same you use
I have a crash on a call to PMPI_Win_unlock(). My program runs with
openmpi 1.2.7 on Ubuntu.
Are there any known issues in 1.2.7 with RMA window calls?
Linux local 2.6.24-19-generic #1 SMP Wed Jun 18 14:43:41 UTC 2008
i686 GNU/Linux
[local:27767] *** Process received signal ***
[local:27767
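For reference, a minimal passive-target RMA sketch around MPI_Win_unlock, the call that crashes above (sizes and ranks are illustrative; run with 2 ranks):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, n = 4;
        double *base, buf[4] = {1, 2, 3, 4};
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Alloc_mem(n * sizeof(double), MPI_INFO_NULL, &base);
        MPI_Win_create(base, n * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);
        if (rank == 0) {
            /* Passive-target epoch: lock, Put, unlock. The unlock is what
             * completes the transfer at both origin and target. */
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
            MPI_Put(buf, n, MPI_DOUBLE, 1, 0, n, MPI_DOUBLE, win);
            MPI_Win_unlock(1, win);
        }
        MPI_Win_free(&win);   /* collective; waits for outstanding RMA */
        MPI_Free_mem(base);
        MPI_Finalize();
        return 0;
    }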
I am trying to connect a client MPI app to a server with
MPI_Comm_connect. I get this error:
$ mpiexec -n 1 client 0.1.0:2000
Processor 0 (1193, Sender) initialized
Processor 0 connecting to '0.1.0:2000'
[local:01193] *** Process received signal ***
[local:01193] Signal: Bus error (10)
[local:0
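For reference, a bare-bones sketch of the client side of MPI_Comm_connect (the port string is normally the one returned by MPI_Open_port on the server; the "0.1.0:2000" form above is implementation-specific):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm server;
        MPI_Init(&argc, &argv);
        /* argv[1] carries the port name published by the server. */
        MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_WORLD, &server);
        /* ... talk to the server over the new intercommunicator ... */
        MPI_Comm_disconnect(&server);
        MPI_Finalize();
        return 0;
    }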
Re: [OMPI users] MPI_Brecv vs multiple MPI_Irecv
> From: Robert Kubrick
> To: Open MPI Users
> Date: 08/27/2008 08:51 AM
> Sent by: users-boun...@open-mpi.org
> Cc: mpich-discuss
> Please respond to Open MPI Users
>
> Interesting
What is the definition of a buffered receive?
george.
On Aug 26, 2008, at 10:17 PM, Robert Kubrick wrote:
From a performance point of view, which one is better:
MPI_Buffer_attach(10*sizeof(MSG))
MPI_Brecv()
or
MPI_Recv_init()
MPI_Recv_init()
MPI_Recv_init()
... /* 10 recv handlers */
MPI_Start(all
From a performance point of view, which one is better:
MPI_Buffer_attach(10*sizeof(MSG))
MPI_Brecv()
or
MPI_Recv_init()
MPI_Recv_init()
MPI_Recv_init()
... /* 10 recv handlers */
MPI_Start(all recv)
MPI_Waitany()
I understand MPI_Brecv will require an extra message copy, from the
attached buffer
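To make the second option concrete, a minimal sketch of the pre-posted persistent-receive pattern (message size, count, tag, and ranks are illustrative; run with 2 ranks):

    #include <mpi.h>

    #define NREQ    10
    #define MSG_LEN 256

    int main(int argc, char **argv)
    {
        int rank, i, idx;
        char bufs[NREQ][MSG_LEN];
        MPI_Request reqs[NREQ];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {              /* sender side, just for completeness;
                                       * payload contents don't matter here */
            for (i = 0; i < NREQ; i++)
                MPI_Send(bufs[0], MSG_LEN, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Pre-post NREQ persistent receives, one buffer each. */
            for (i = 0; i < NREQ; i++)
                MPI_Recv_init(bufs[i], MSG_LEN, MPI_CHAR, 0, 0,
                              MPI_COMM_WORLD, &reqs[i]);
            MPI_Startall(NREQ, reqs);   /* activate all receives */
            for (i = 0; i < NREQ; i++) {
                MPI_Waitany(NREQ, reqs, &idx, MPI_STATUS_IGNORE);
                /* ... process bufs[idx]; MPI_Start(&reqs[idx]) re-posts ... */
            }
            for (i = 0; i < NREQ; i++)
                MPI_Request_free(&reqs[i]);
        }
        MPI_Finalize();
        return 0;
    }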
On Aug 19, 2008, at 11:12 AM, Jitendra Kumar wrote:
George,
Thanks for your reply. However, I am still not able to resolve the
issue. I have been looking at one of your old posts:
http://www.open-mpi.org/community/lists/users/2005/08/0123.php
(I have tried to explain the issues below with snippets
A question related to an old thread:
in case of solution 2), how do you broadcast 'flags' to the slaves if
they're processing asynchronous data? I understand MPI_Bcast is a
collective operation requiring all processes in a communicator to
call it before it completes. If the slaves are processing
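For context, the collective semantics look like this in a minimal sketch (compute_flags is a hypothetical placeholder for whatever the master computes):

    #include <mpi.h>

    /* Hypothetical placeholder for the master's flag computation. */
    static int compute_flags(void) { return 42; }

    int main(int argc, char **argv)
    {
        int rank, flags = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            flags = compute_flags();   /* only the root sets the value */
        /* Collective: every rank in the communicator must reach this call
         * with the same root before any rank can rely on 'flags'. */
        MPI_Bcast(&flags, 1, MPI_INT, 0, MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }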
On Jul 30, 2008, at 11:12 AM, Mark Borgerding wrote:
I appreciate the suggestion about running a daemon on each of the
remote nodes, but wouldn't I kind of be reinventing the wheel
there? Process management is one of the things I'd like to be able
to count on ORTE for.
Would the following
Mark, if you can run a server process on the remote machine, you
could send a request from your local MPI app to your server, then use
an intercommunicator to link the local process to the new remote
process, as sketched below.
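A bare-bones sketch of what that server process could look like (single connection, no error handling; the client passes the printed port string to MPI_Comm_connect):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char port[MPI_MAX_PORT_NAME];
        MPI_Comm client;

        MPI_Init(&argc, &argv);
        MPI_Open_port(MPI_INFO_NULL, port);
        printf("server listening on %s\n", port); /* hand this to the client */
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);
        /* ... service requests over the resulting intercommunicator ... */
        MPI_Comm_disconnect(&client);
        MPI_Close_port(port);
        MPI_Finalize();
        return 0;
    }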
On Jul 30, 2008, at 9:55 AM, Mark Borgerding wrote:
I'm afraid I can't dictate to the customer
HDF5 supports parallel I/O through MPI-I/O. I've never used it, but I
think the API is easier than direct MPI-I/O, maybe even easier than
raw reads/writes given its support for hierarchical objects and metadata.
HDF5 supports multiple storage models and it supports MPI-IO.
HDF5 has an open inter
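For illustration, a minimal sketch of opening an HDF5 file through its MPI-IO driver (assumes an HDF5 build with parallel support; the file name is arbitrary):

    #include <mpi.h>
    #include <hdf5.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Route HDF5 file access through MPI-IO. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

        hid_t file = H5Fcreate("out.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        /* ... create datasets and write, collectively or independently ... */
        H5Fclose(file);
        H5Pclose(fapl);

        MPI_Finalize();
        return 0;
    }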
What happened to the MPI/RT forum? The last standard, 1.1, was issued
in 12/2001, but the website is still active.