Dear MPI people,
I am working on a graph partitioning problem, where we have an undirected graph of p MPI processes. The edges have weights that show how much communication the processes do among themselves. The cluster has multiple nodes (each node with 8 cores)
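As an aside (not part of the original post): one very simple greedy heuristic for this kind of mapping is to repeatedly take the heaviest edge and try to co-locate its two endpoint processes on the same node, subject to the per-node core capacity. The sketch below is illustrative only (all names are made up); dedicated partitioners such as METIS/Scotch, or MPI's own `MPI_Dist_graph_create`, handle this far better.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// One weighted edge of the process communication graph.
struct Edge { int u, v, w; };

// Greedy co-location: start with one process per "node", then merge the
// endpoints of the heaviest edges while the merged group still fits on
// one node. Returns a node id per process.
std::vector<int> greedy_colocate(int p, int cores_per_node,
                                 std::vector<Edge> edges) {
    std::vector<int> node(p);
    std::iota(node.begin(), node.end(), 0);   // process i starts on node i
    std::vector<int> size(p, 1);              // processes currently per node
    std::sort(edges.begin(), edges.end(),
              [](const Edge& a, const Edge& b) { return a.w > b.w; });
    for (const Edge& e : edges) {
        int nu = node[e.u], nv = node[e.v];
        if (nu == nv) continue;                       // already together
        if (size[nu] + size[nv] > cores_per_node)     // would overflow node
            continue;
        for (int i = 0; i < p; ++i)                   // merge node nv into nu
            if (node[i] == nv) node[i] = nu;
        size[nu] += size[nv];
        size[nv] = 0;
    }
    return node;
}
```

With 4 processes, 2 cores per node and edges (0,1,10), (2,3,8), (0,2,1), the heavy pairs 0/1 and 2/3 end up co-located and the light edge (0,2) stays inter-node.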
Dear people,
Let us say there are N MPI processes. Each MPI process has to communicate with some T processes, where T < N. This information is a directed graph (and every process knows only about its own edges). I need to convert it to an undirected graph, so that each process will
,
Mudassar
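Not from the thread, but the conversion itself is just a union of each edge with its reverse: process i's undirected neighbours are everyone it sends to plus everyone that sends to it. The sketch below shows that logic on a full 0/1 adjacency matrix without MPI; in a real MPI program each process could exchange its adjacency row with something like `MPI_Alltoall` first.

```cpp
#include <vector>

// Symmetrize a directed adjacency matrix: und[i][j] is true iff i sends
// to j or j sends to i. adj[i][j] == true means "i sends to j".
std::vector<std::vector<bool>>
symmetrize(const std::vector<std::vector<bool>>& adj) {
    const std::size_t n = adj.size();
    std::vector<std::vector<bool>> und(n, std::vector<bool>(n, false));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            und[i][j] = adj[i][j] || adj[j][i];   // union with reverse edge
    return und;
}
```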
From: Jeff Squyres <jsquy...@cisco.com>
To: Open MPI Users <us...@open-mpi.org>
Cc: Mudassar Majeed <mudassar...@yahoo.com>
Sent: Friday, June 1, 2012 4:52 PM
Subject: Re: [OMPI users] Intra-node communication
...and exactly how
Maybe it is not installed at our supercomputing center. What do you suggest?
best regards,
- Forwarded Message -
From: Mudassar Majeed <mudassar...@yahoo.com>
To: Jeff Squyres <jsquy...@cisco.com>
Sent: Friday, June 1, 2012 5:03 PM
Subject: Re: [OMPI
Dear MPI people,
Can someone tell me why MPI_Ssend takes more time when two MPI processes are on the same node? The same two processes on different nodes take much less time for the same message exchange. I am using a supercomputing center, and this is what happens there.
quy...@cisco.com>
To: Mudassar Majeed <mudassar...@yahoo.com>
Cc: Open MPI Users <us...@open-mpi.org>
Sent: Tuesday, May 22, 2012 1:58 PM
Subject: Re: [OMPI users] Need MPI algorithms, please help
On May 22, 2012, at 2:35 AM, Mudassar Majeed wrote:
> The algorithms can b
need to name at least one so that I can pursue it.
best regards,
From: Jeff Squyres <jsquy...@cisco.com>
To: Mudassar Majeed <mudassar...@yahoo.com>; Open MPI Users
<us...@open-mpi.org>
Sent: Tuesday, May 22, 2012 1:45 AM
Subject: Re: [OM
Dear MPI people,
I need a set of algorithms for calculating the same thing using different distributed (MPI) algorithms. The algorithms may need different data distributions, and their execution times are sensitive to the problem size, number of
Dear people,
I am using MPI at a supercomputing center. I don't have access to reinstall Open MPI with valgrind support enabled. I need to check for memory leaks in my application. How can I see in which line of my MPI application's code there is a memory leak?
No, I am using MPI_Ssend and MPI_Recv everywhere.
regards,
Mudassar
From: Jeff Squyres <jsquy...@cisco.com>
To: Mudassar Majeed <mudassar...@yahoo.com>; Open MPI Users
<us...@open-mpi.org>
Cc: "anas.alt...@gmail.com" <anas.
Dear people,
In my MPI application, all the processes call MPI_Finalize (all processes reach there), but the rank 0 process cannot finish MPI_Finalize and the application remains running. Please suggest what the cause of that could be.
regards,
Mudassar
looking for it :(
thanks and regards,
Mudassar Majeed
PhD Student
Linkoping University
PhD Topic: Parallel Computing (Optimal composition of parallel programs and
runtime support).
From: Jeff Squyres <jsquy...@cisco.com>
To: mudassar...@yahoo.com; Op
What if two processes Pi and Pj send messages to each other at the same time? Will both block in your suggested code?
If not, then I can go for that. BTW, I have tried that before.
regards,
From: Lukas Razik <li...@razik.name>
To: Mudassar
Dear people,
I have a scenario as shown below; please tell me whether it is possible or not
--
while (!IsDone)
{
    // some code here
    MPI_Irecv( .. );
    // some code here
    MPI_Iprobe( .,
I know about these functions; they have special requirements, like the MPI_Irecv call having to be made in every process. My processes should not have to look for messages or receive them explicitly. Instead, messages should go into their message queues and be retrieved when needed, just like UDP communication.
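A sketch of the *desired* semantics described above (this is not an MPI feature, and the class and method names are made up for illustration): messages land in a per-sender queue without the receiver posting a receive, and are pulled out later on demand, UDP-style. MPI does buffer unexpected messages internally, but they can only be retrieved through an explicit MPI_Recv/MPI_Iprobe call.

```cpp
#include <map>
#include <queue>
#include <string>

// Hypothetical mailbox: per-sender FIFO queues, pull-on-demand retrieval.
class Mailbox {
public:
    // Called when a message arrives; the "receiver" does nothing here.
    void deliver(int from, const std::string& msg) { q_[from].push(msg); }

    // Is there an undelivered message from this sender?
    bool pending(int from) const {
        auto it = q_.find(from);
        return it != q_.end() && !it->second.empty();
    }

    // Pop the oldest message from this sender (caller checks pending()).
    std::string retrieve(int from) {
        std::string m = q_[from].front();
        q_[from].pop();
        return m;
    }

private:
    std::map<int, std::queue<std::string>> q_;
};
```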
ubject: Re: [OMPI users] Process Migration
To: Open MPI Users <us...@open-mpi.org>
Message-ID: <811ffdfc-c3b6-4bf7-9e53-95c0b572f...@cisco.com>
Content-Type: text/plain; charset=us-ascii
On Nov 10, 2011, at 11:30 AM, Mudassar Majeed wrote:
> For example there are 10 nodes, and each no
be achieved (to
achieve balance in load and communications). I need your suggestions in this regard.
thanks and best regards,
Mudassar
From: Josh Hursey <jjhur...@open-mpi.org>
To: Open MPI Users <us...@open-mpi.org>
Cc: Mudassar Majeed <mudass
. That's why I want to see whether it is possible to migrate a process from one core to another or not. Then I will see how good my heuristic is.
thanks
Mudassar
From: Jeff Squyres <jsquy...@cisco.com>
To: Mudassar Majeed <mudassar...@yahoo.com>; Op
Dear MPI community,
Please inform me if it is possible to migrate MPI processes among nodes or cores. By node I mean a machine having multiple cores. So the cluster can have several nodes, and each node can have several cores. I want to know if it is
Dear MPI people,
I have a vector class with a template, as follows:
template <class T>
class Vector
It is a wrapper on the STL vector class. The element type T will be replaced by the actual instantiated type at runtime. I have not seen any support in C++
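A minimal sketch of what such a wrapper might look like (the original declaration is truncated, so this shape is an assumption). Note that a template parameter is actually fixed at compile time, not at runtime, which is one reason MPI's C API offers no direct support for it: you pass the underlying raw buffer and a matching MPI datatype yourself.

```cpp
#include <cstddef>
#include <vector>

// Assumed shape of the wrapper: a thin layer over std::vector<T> that
// exposes the raw buffer, e.g. for handing to MPI_Send/MPI_Recv.
template <class T>
class Vector {
public:
    void push_back(const T& x) { data_.push_back(x); }
    std::size_t size() const { return data_.size(); }
    T* buffer() { return data_.data(); }   // contiguous raw storage
private:
    std::vector<T> data_;
};
```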
Thank you for this very useful tool.
regards,
Mudassar
From: Edgar Gabriel <gabr...@cs.uh.edu>
To: Mudassar Majeed <mudassar...@yahoo.com>; Open MPI Users <us...@open-mpi.org>
Sent: Thursday, October 27, 2011 6:20 PM
Subject: Re: [OMPI u
Dear MPI people,
I want to use the LogGP model with MPI to find how much time a message of K bytes will take. For this, I need to find the latency L, overhead o, and gap G. Can somebody tell me how I can measure these three parameters of the underlying network? And
the computation whether the data has arrived or not; then it will operate on that data. Someone please inform me how I can accomplish this.
regards,
Mudassar Majeed.
From: Terry Dontje <terry.don...@oracle.com>
To: Jeff Squyres <jsquy...@cisco.com>
Cc: Mudassar Majeed <mudassar...@yahoo.com>; Open MPI Users <us...@open-mpi.org>
Sent: Saturday, July 16, 2011 5:25 AM
Subject: Re: [OMPI users] U
..!!
P14 >> I could reach here ...!!
P1 >> Received from P7, packet contains rank: 11
P1 >> I could reach here ...!!
P9 >> I could reach here ...!!
P2 >> Received from P11, packet contains rank: 13
P2 >> I could reach here ...!!
P0 >> I could reach
t was
displayed before sending the message. But on the receiver side, the MPI_SOURCE turns out to be wrong.
This suggests to me that messages on the receiving side are matched on the basis of MPI_ANY_SOURCE, which seems not to check the destination of a message while taking it from the message queue of the
I would start by either adding debugging printf's to your code to trace the messages, or narrowing the code down to a small kernel, so that you can prove to yourself that MPI is working the way it should; if it is not, you can show us where it is going wrong.
--td
On 7/15/2011 6:51 AM, Mudassar Ma
e and when you receive a message, the
status.MPI_SOURCE field will contain the rank of the actual sender, not the
receiver's rank. If you are not seeing that, then there is a bug somewhere.
--td
On 7/14/2011 9:54 PM, Mudassar Majeed wrote:
> Friend,
> I cannot specify the rank of the sender. Bec
,
Mudassar
From: Jeff Squyres <jsquy...@cisco.com>
To: Mudassar Majeed <mudassar...@yahoo.com>
Cc: Open MPI Users <us...@open-mpi.org>
Sent: Friday, July 15, 2011 3:30 AM
Subject: Re: [OMPI users] Urgent Question regarding, MPI_ANY_SOURCE.
Rig
com>
To: Mudassar Majeed <mudassar...@yahoo.com>; Open MPI Users <us...@open-mpi.org>
Sent: Friday, July 15, 2011 1:58 AM
Subject: Re: [OMPI users] Urgent Question regarding, MPI_ANY_SOURCE.
When you use MPI_ANY_SOURCE in a receive, the rank of the actual sender is