It sounds like, with the fault tolerance features specifically mentioned
by Vasiliy, MPI in its current form may not be the simplest choice.
On Tue, 2010-03-09 at 18:55 -0700, Ralph Castain wrote:
> Running an orted directly won't work - it is intended solely to be launched
> when running a job
On Mar 9, 2010, at 7:26 PM, Lasse Kliemann wrote:
Thanks for your help, Ralph.
--disable-pty-support indeed makes the problem go away.
The system is self-made and non-standard. I have in my kernel
config:
CONFIG_UNIX98_PTYS=y
# CONFIG_LEGACY_PTYS is not set
Maybe legacy PTYs are required?
Moreover, the system uses UDEV (version 151). I
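For reference, the workaround mentioned above amounts to rebuilding Open MPI
with pty support disabled, roughly like this (the install prefix is only an
example, not the exact invocation used here):

    ./configure --prefix=$HOME/openmpi --disable-pty-support
    make all install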
Running an orted directly won't work - it is intended solely to be launched
when running a job with "mpirun".
Your application doesn't immediately sound like it -needs- MPI, though you
could always use it anyway. The MPI messaging system is fast, but it isn't
clear if your application will
Okay, I dug thru the glibc 2.11 manual - there doesn't appear to be any problem
here in the code itself.
The problem instead, I believe, is caused by your system not supporting pty's,
yet you are trying to use them. In this case, tcgetattr will return errno 22
because the file descriptor is
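For anyone who wants to see what the installed glibc actually reports, a small
standalone probe (my own sketch, not code from the Open MPI tree) can be
compiled on the affected machine and pointed at a descriptor:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        struct termios t;
        /* Probe stdin; on a system where pty support is broken, this is
           the kind of call that comes back with errno 22 (EINVAL). */
        if (tcgetattr(STDIN_FILENO, &t) != 0)
            fprintf(stderr, "tcgetattr: errno=%d (%s)\n",
                    errno, strerror(errno));
        else
            printf("tcgetattr succeeded\n");
        return 0;
    }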
Alas, I am far from a Glibc expert. I did a grep through the Glibc
changelog, but only found a reference to tcgetattr from 2006.
Of course, I would also like to see a real solution here instead
of ignoring the error condition.
* Message by -Ralph Castain- from Tue 2010-03-09:
> Ignoring an
Hello.
Some time ago I started studying MPI (Open MPI).
I need to write a client/server application that runs on 50 servers in
parallel. Each instance can communicate with the others over TCP/IP (sending
commands, doing some parallel computations).
A master controls all the clients (slaves): it sends control commands, and if
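For what it is worth, that master/slave pattern maps quite directly onto MPI
point-to-point calls. A bare-bones sketch (an illustration only, not this
application) could look like this:

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal master/slave example: rank 0 sends a command code to every
       other rank and collects one integer result back from each. */
    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                       /* master */
            int cmd = 42, result, src;
            for (src = 1; src < size; src++)
                MPI_Send(&cmd, 1, MPI_INT, src, 0, MPI_COMM_WORLD);
            for (src = 1; src < size; src++) {
                MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("master got %d\n", result);
            }
        } else {                               /* slave */
            int cmd, result;
            MPI_Recv(&cmd, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            result = cmd * rank;               /* stand-in for real work */
            MPI_Send(&result, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

Launched with something like "mpirun -np 50 ./master_slave" (the executable
name is just a placeholder), the process startup and messaging come for free.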
On Tue, Mar 09, 2010 at 05:43:02PM +0100, Ramon wrote:
> Am I the only one experiencing such problem? Is there any solution?
No, you are not the only one. Several others have mentioned the "busy
wait" problem.
The response from the Open MPI developers, as I understand it, is that
the MPI job
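(In case it helps: the knob that usually comes up in these busy-wait
discussions is Open MPI's yield-when-idle MCA parameter; whether and how it
applies to your version is worth checking in the FAQ. Something like

    mpirun --mca mpi_yield_when_idle 1 -np 4 ./your_app

where ./your_app is only a placeholder. It makes the polling loop give up the
CPU more willingly, but it does not turn the wait into a truly blocking one.)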
Ignoring an error doesn't seem like a good idea. The real question is why we
are getting that error - it sounds like the newest Glibc release has changed
the API?? Can you send us the revised one so we can put in a test and use the
correct API for the installed version?
On Mar 9, 2010, at
Am I the only one experiencing such a problem? Is there any solution? Or
shall I downgrade to LAM/MPI?
Regards
Ramon.
Ramon wrote:
Hi,
I've recently been trying to develop a client-server distributed file
system (for my thesis) using MPI. The communication between the
machines is
$ mpirun -n 1 ls
--
mpirun was unable to launch the specified application as it encountered an
error:
Error: pipe function call failed when setting up I/O forwarding subsystem
Node: x.xx..xx
while
Using funneled will make your code more portable in the long run
as it is guaranteed by the MPI standard. Using single, i.e. MPI_Init,
works for now for a typical OpenMP+MPI program in which all MPI calls are
outside OpenMP sections. But as MPI implementations add more
performance-optimized
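Concretely, asking for FUNNELED just means replacing MPI_Init with
MPI_Init_thread and checking what the library actually provides; a minimal
sketch:

    #include <mpi.h>
    #include <stdio.h>

    /* Request FUNNELED thread support instead of calling MPI_Init;
       only the main thread may then make MPI calls. */
    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        if (provided < MPI_THREAD_FUNNELED)
            fprintf(stderr, "warning: library only provides level %d\n",
                    provided);

        /* ... OpenMP parallel regions here, with MPI calls kept outside
           of them (or made only by the master thread) ... */

        MPI_Finalize();
        return 0;
    }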
Hi all,
Now I understand the difference between SINGLE and FUNNELED, and why I
should use FUNNELED. Thank you!
Yuanyuan