e versions. It can always be
> resurrected from SVN history if someone wants to pick up this effort again
> in the future.
>
>
> On Dec 6, 2012, at 11:07 AM, Damien wrote:
>
> > So far, I count three people interested in OpenMPI on Windows. That's
> not a case for on
Dear OpenMPI developers,
I'd like to add my 2 cents that this would be a very desirable feature
enhancement for me as well (and perhaps others).
Best regards
Durga
On Tue, Aug 14, 2012 at 4:29 PM, Zbigniew Koza wrote:
> Hi,
>
> I've just found this information on nVidia's
This is just a thought:
according to the system() man page, 'SIGCHLD' is blocked during the
execution of the program. Since you are executing your command as a
daemon in the background, it will be permanently blocked.
Does OpenMPI daemon depend on SIGCHLD in any way? That is about the
only
I think this is a *great* topic for discussion, so let me throw some
fuel on the fire: the mechanism described in the blog (which makes
perfect sense) is fine for (N)UMA shared-memory architectures. But
will it work for asymmetric architectures such as the Cell BE or
discrete GPUs where the data
Since /tmp is mounted across a network and /dev/shm is (always) local,
/dev/shm seems to be the right place for shared memory transactions.
If you create temporary files using mktemp, are they created in
/dev/shm or /tmp?
On Thu, Nov 3, 2011 at 11:50 AM, Bogdan Costescu
Any particular reason these calls don't nest? In some other HPC-like
paradigms (e.g. VSIPL) such calls are allowed to nest (i.e. only the
finalize() that matches the first init() will destroy allocated
resources.)
Just a curiosity question, doesn't really concern me in any particular way.
Best
Is there any provision/future plans to add OpenCL support as well?
CUDA is an Nvidia-only technology, so it might be a bit limiting in
some cases.
Best regards
Durga
On Thu, Oct 27, 2011 at 2:45 PM, Rolf vandeVaart wrote:
> Actually, that is not quite right. From the
If the mmap() pages are created with MAP_SHARED, then they should be
shareable with other processes on the same node, shouldn't they? MPI
processes are just like any other process, aren't they? Will one of
the MPI gurus please comment?
Regards
Durga
On Mon, Oct 17, 2011 at 9:45 AM, Gabriele Fatigati
Is anything done at the kernel level portable (e.g. to Windows)? It
*can* be, in principle at least (by putting appropriate #ifdef's in
the code), but I am wondering if it is in reality.
Also, in 2005 there was an attempt to implement SSI (Single System
Image) functionality to the then-current
A follow-up question (and pardon if this sounds stupid) is this:
If I want to make my process multithreaded, BUT only one thread has
anything to do with MPI (for example, using OpenMP inside MPI), then
the results will be correct EVEN IF #1 or #2 of Eugene's points holds true. Is
this correct?
Thanks
I'd like to add to this question the following:
If I compile with the --enable-heterogeneous flag for different
*architectures* (I have a mix of old 32 bit x86, newer x86_64 and some
Cell BE based boxes (PS3)), would I be able to form an MPD ring between
all these different machines?
Best regards
I think the 'middle ground' approach can be simplified even further if
the data file is on a shared device (e.g. an NFS/Samba mount) that can be
mounted at the same location in the file system tree on all nodes. I
have never tried it, though, and mmap()'ing a non-POSIX-compliant file
system such as
Is the data coming from a read-only file? In that case, a better way
might be to memory map that file in the root process and share the map
pointer in all the slave threads. This, like shared memory, will work
only for processes within a node, of course.
On Fri, Sep 24, 2010 at 3:46 AM, Andrei
a/PAPERS/scop3.pdf.
>
> george.
>
>
> On Nov 15, 2009, at 14:39 , Durga Choudhury wrote:
>
>> I apologize for dragging in this conversation in a different
>> direction, but I'd be very interested to know why the behavior with
>> the Playstation is different from ot
I apologize for dragging in this conversation in a different
direction, but I'd be very interested to know why the behavior with
the Playstation is different from other architectures. The PS3 box has
a single gigabit ethernet port and no expansion ports, so I'd assume its
behavior would be no
in Open MPI. I attempted,
>> but no luck. Can you please tell how to write such programs in Open MPI.
>>
>> Thanks in advance.
>>
>> Regards,
>> On Thu, Jul 9, 2009 at 8:30 PM, Durga Choudhury <dpcho...@gmail.com> wrote:
>>>
>>> Although
The 'system' command will fork a separate process to run. If I
remember correctly, forking within MPI can lead to undefined behavior.
Can someone on the OpenMPI development team clarify?
What I don't understand is: why is your TCP network so unstable that
you are worried about reachability? For MPI
Although I have perhaps the least experience on the topic in this
list, I will take a shot; more experienced people, please correct me:
MPI standards specify communication mechanisms, not fault tolerance at
any level. You may achieve network tolerance at the IP level by
implementing 'equal cost
Josh
This is actually a concern addressed to all the authors/OpenMPI
contributors. The links to IEEE Xplore or the ACM require a subscription
which, unfortunately, not all the list subscribers have.
Would it be a copyright violation to post the actual paper/article to
the list instead of just a
You could use a separate namespace (if you are using C++) and define
your functions there...
Durga
On Wed, May 13, 2009 at 1:20 PM, Le Duy Khanh wrote:
> Dear,
>
> I intend to override some MPI functions such as MPI_Init, MPI_Recv... but I
> don't want to dig into OpenMPI
Automatically striping large messages across multiple NICs is certainly a
very nice feature; I was not aware that OpenMPI does this transparently. (I
wonder if other MPI implementations do this or not). However, I have the
following concern: Since the communication over an ethernet NIC is most
Did you export your variables? Otherwise the child shell that forks the MPI
process will not inherit them.
On 8/14/07, Rodrigo Faccioli wrote:
>
> Thanks, Tim Prins for your email.
>
> However, it didn't resolve my problem.
>
> I set the environment variable on my
s high potential of the resources
(ie, memory) ending up associated with a different processor than the
one the process gets pinned to. That isn't a big deal on Intel
machines, but is a major issue for AMD processors.
Just my $0.02, anyway.
Brian
On Nov 28, 2006, at 6:09 PM, Durga Choudhury
Jeff (and everybody else)
First of all, pardon me if this is a stupid comment; I am learning the
nuts-and-bolts of parallel programming, but my comment is as follows:
Why can't this be done *outside* openMPI, by calling Linux's processor
affinity APIs directly? I work with a blade server kind
Chev
Interesting question; I too would like to hear about it from the experts in
this forum. However, off the top of my head, I have the following advice for
you.
Yes, you could share the memory between processes using the shm_xxx system
calls of unix. However, it would be a lot easier if you
Calin
Your questions don't belong in this forum. You either need to be computer
literate (your questions are basic OS related) or delegate this task to
someone more experienced.
Good luck
Durga
On 11/3/06, calin pal wrote:
/*please read the mail and ans my
As an alternate suggestion (although George's is better, since this will
affect your entire network connectivity), you could override the default TCP
timeout values with the "sysctl -w" command.
The following three OIDs affect TCP timeout behavior under Linux:
net.ipv4.tcp_keepalive_intvl = 75
Very interesting, indeed! Message passing running over raw Ethernet using
cheap COTS PCs is indeed the need of the hour for people like me who have
very shallow pockets. Great work! What would make this effort *really* cool
is to have a one-to-one mapping of APIs from the MPI domain to the GAMMA domain,
nsfers rate falls
down to ~130 MB/s and after a long run finally comes down to ~54 MB/s. Why is
this type of network slowdown happening over time?
Regards,
Jayanta
On Mon, 23 Oct 2006, Durga Choudhury wrote:
> Did you try channel bonding? If your OS is Linux, there are plenty of
> "howto
Did you try channel bonding? If your OS is Linux, there are plenty of
"howto" on the internet which will tell you how to do it.
However, your CPU might be the bottleneck in this case. How much CPU
horsepower is still available at 140 MB/s?
If the CPU *is* the bottleneck, changing your network
George
I knew that was the answer to Calin's question, but I still would like to
understand the issue:
by default, the openMPI installer installs the libraries in /usr/local/lib,
which is a standard location for the C compiler to look for libraries. So
*why* do I need to explicitly specify this
Hi all
I am getting an error (details follow) in the simplest of the possible test
scenarios:
Two identical regular Dell PCs connected back-to-back via an ethernet switch
on the 10/100 ethernet. Both run Fedora Core 4. Identical version (1.1) of
Open MPI is compiled and installed on both of
Hi All
We have been using the Argonne MPICH (over TCP/IP) on our in-house designed
embedded multicomputer for the last several months with satisfactory results.
Our network technology is custom-built and is *not* InfiniBand (or any
published standards, such as Myrinet) based. This is due to the
Do you want to use MPI to chain a bunch of such laptops together (e.g. via
ethernet) or just for the cores to talk to each other? If the latter, you do
not need MPI. Your SMP operating system (e.g. Linux) will automatically
utilize both cores. The Linux 2.6 kernel also supports processor affinity