Sorry for the confusion. I am not building OpenMPI from the SVN source. I
downloaded the 1.8 tarball and did configure, and that is what failed. I was
surprised that it didn't work on a vanilla Red Hat Enterprise Linux 6,
out-of-the-box operating system installation.
The error message suggests ...
Per Dave's comment: note that running autogen.pl (or autogen.sh -- they're
symlinks to the same thing) is *only* necessary for SVN/hg/git checkouts of
Open MPI.
You should *not* run autogen.pl in an expanded Open MPI tarball unless you
really know what you're doing (e.g., you made a change to the build system
files).
Open MPI 1.4.3 is *ancient*. Please upgrade -- we just released Open MPI 1.8
last week.
Also, please look at this FAQ entry -- it walks you through a lot of basic
troubleshooting steps for getting simple MPI programs working:
http://www.open-mpi.org/faq/?category=running#diagnose-multi-host
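For anyone following along: the first steps in that FAQ amount to getting a
trivial program running across every host. A minimal sketch of such a test in
Python, using mpi4py purely for illustration (the FAQ walks through its own
examples):

from mpi4py import MPI

# Each rank reports its identity and host; if this runs cleanly across
# all hosts, basic launching and network wire-up are working.
comm = MPI.COMM_WORLD
print("Hello from rank %d of %d on %s"
      % (comm.Get_rank(), comm.Get_size(), MPI.Get_processor_name()))

Run it with something like: mpirun -np 4 --host host1,host2 python hello_mpi.py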
That worked!
But still a mystery.
I tried printing the environment immediately before mpirun. Inside the Python
wrapper, I do os.system('env') immediately before the subprocess.Popen(
['mpirun', ...], shell=False ) command. This returns SHELL=/bin/csh, and I
can confirm that getpwuid, if it ...
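For anyone trying to reproduce this, a minimal sketch of the wrapper pattern
described above -- the mpirun arguments and application name here are
placeholders, not the actual ones from this job:

import os
import subprocess

# Dump the environment exactly as the wrapper sees it, right before launch
os.system('env')

# Launch mpirun with shell=False, so no intervening shell rewrites the
# environment; the child inherits this process's env directly
proc = subprocess.Popen(['mpirun', '-np', '4', './my_app'], shell=False)
proc.wait()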
I doubt that the rsh launcher is getting confused by the command you show
below. However, if that command is embedded in a script that changes the shell
away from your default shell, then yes - it might get confused. When the rsh
launcher spawns your remote orted, it attempts to set some envars to ...
Thanks Noam, that makes sense.
Yes, I did mean to do ". hello" (with a space in between). That was an attempt
to replicate whatever OpenMPI is doing.
In the first post I mentioned that my mpirun command actually gets executed
from within a Python script using the subprocess module. I don't know ...
The permission denied looks like it is being issued against
'/bin/.'
What do you get if you grep your own username from /etc/passwd? That is,
% grep Edwin /etc/passwd
If your shell is listed as /bin/csh, then you need to use csh's syntax,
which would be
% source hello
(which will also work from bash).
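As a side note, the same check can be done from inside the Python wrapper
itself, since Python's pwd module reads the same passwd database that
getpwuid() consults (an illustrative sketch, not from the original posts):

import os
import pwd

# pw_shell is the login shell recorded in /etc/passwd -- the shell that
# getpwuid()-based launchers will assume for this user
entry = pwd.getpwuid(os.getuid())
print(entry.pw_name, entry.pw_shell)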
On Mar 30, 2014, at 2:43 PM, W Spector wrote:
> The mpi.mod file created from both the openmpi-1.7.4 and openmpi-1.8rc1
> tarballs does not seem to contain interface blocks for the Fortran API --
> whether the calls use choice buffers or not.
Can you be a bit more specific -- are ...
I guess this is not OpenMPI related anymore. I can repeat the essential
problem interactively:
% echo $SHELL
/bin/csh
% echo $SHLVL
1
% cat hello
echo Hello
% /bin/bash hello
Hello
% /bin/csh hello
Hello
% . hello
/bin/.: Permission denied
I think I need to hope the administrator can fix ...
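Assuming a file named hello containing just "echo Hello" in the current
directory, the shell difference in the transcript above can even be reproduced
from Python (a sketch mirroring the failing command, not from the original
posts):

import subprocess

# bash has a '.' builtin that sources the file, so this prints "Hello"
subprocess.call(['/bin/bash', '-c', '. ./hello'])

# csh has no '.' builtin; it searches PATH for a command named '.', hits
# the directory /bin/. instead, and fails with "Permission denied"
subprocess.call(['/bin/csh', '-c', '. ./hello'])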
On Apr 7, 2014, at 10:04 PM, Blosch, Edwin L wrote:
> I am submitting a job for execution under SGE. My default shell is /bin/csh.
Where - in SGE, or on the interactive command line?
> The script that is submitted has #!/bin/bash at the top. The script runs on
> the 1st node allocated to ...
If I create a program called hello which just contains the line "echo hello",
and then I do "/bin/. hello", I get permission denied.
Is that what you mean?
I might be lost in esoteric corners of Linux. What is "." under /bin? There
is no program there by that name. I've heard of "." as a ...
Looks to me like the problem is here:
/bin/.: Permission denied.
Appears you don't have permission to exec bash??
On Apr 7, 2014, at 1:04 PM, Blosch, Edwin L wrote:
> ...
Ok, got it. Thanks.
On Apr 7, 2014, at 4:04 PM, Hamid Saeed wrote:
> ...
I am submitting a job for execution under SGE. My default shell is /bin/csh.
The script that is submitted has #!/bin/bash at the top. The script runs on
the 1st node allocated to the job. The script runs a Python wrapper that
ultimately issues the following mpirun command:
/apps/local/test/
Thanks for the reply.
No.
In my case the problem was a misunderstanding with our network administrator.
Our network should have ports up to 1023 locked, but someone had locked port
1024 too. Because of this I wasn't able to communicate with the other
computers.
I was out on vacation / fully disconnected last week, and am just getting to
all the backlog now...
Are you saying that port 1024 was locked as well -- i.e., that we should set
the minimum to 1025?
On Mar 31, 2014, at 4:32 AM, Hamid Saeed wrote:
> Yes Jeff,
> You were right. The default value ...
Thanks a lot for your help -- it worked finally.
On Mon, Apr 7, 2014 at 6:05 PM, Ralph Castain wrote:
> ...
Deleting the install as you describe is a VERY bad idea. As I explained
elsewhere, the system generally comes with an installation. Blowing things away
can destabilize other areas of the system unless you are (a) very careful, and
(b) very lucky.
Just stay away from the system directories, please ...
It is very simple to uninstall: go to /usr/local/ -- there you will find lib,
bin, etc.; these are the MPI files. Just type rm -r.
Also, next time you want to install, I recommend you use
./configure --prefix=/usr/local/mpi_installation
make -j2
make install
and include the following ...
Nope - make uninstall will not clean everything out, which is one reason we
don't recommend putting things in a system directory.
On Apr 6, 2014, at 8:44 AM, Kamal wrote:
> ...
Hi Ralph,
I use OMPI 1.8 on a MacBook running OS X Mavericks.
As you said, I will create a new directory to install my MPI files.
Thanks for your reply,
Kamal.
On 07/04/2014 17:37, Ralph Castain wrote:
> ...
Looks like bit-rot has struck the sequential mapper support - I'll revive it
for 1.8.1
On Apr 6, 2014, at 7:17 PM, Chen Bill wrote:
> Hi ,
>
> I just tried the openmpi 1.8, but I found the feature --mca rmaps seq doesn't
> work.
>
> for example,
>
> > mpirun -np 4 -hostfile hostsfle --mca rmaps seq ...
Hi Hamid,
So I can uninstall just by typing 'make uninstall', right?
What does 'make -j2' do?
Thanks for your reply,
Kamal
On 07/04/2014 17:38, Hamid Saeed wrote:
> ...
There is no async progress in Open MPI at this time so this is the
expected behavior. We plan to fix this for the 1.9 release series.
-Nathan Hjelm
HPC-5, LANL
On Mon, Apr 07, 2014 at 11:12:06AM +0800, Zehan Cui wrote:
> Hi Matthieu,
>
> Thanks for your suggestion. I tried MPI_Waitall(), but the ...
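For context, a minimal sketch of the kind of non-blocking pattern under
discussion, written here with mpi4py purely for illustration (the original
poster's code is not shown in this thread). Without async progress, the
transfer generally only advances while the code is inside MPI calls, e.g. the
Waitall:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

buf = bytearray(1 << 20)  # 1 MiB message
reqs = []
if rank == 0:
    reqs.append(comm.Isend([buf, MPI.BYTE], dest=1, tag=0))
elif rank == 1:
    reqs.append(comm.Irecv([buf, MPI.BYTE], source=0, tag=0))

# ... overlapped computation would go here; without async progress the
# message may not actually move until we re-enter MPI below ...
MPI.Request.Waitall(reqs)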
Hello,
I also had the same problem, but when I re-installed MPI using
./configure --prefix=/usr/local/
make -j2
make install
it worked.
On Sun, Apr 6, 2014 at 5:30 PM, Kamal wrote:
> ...
What version of OMPI are you attempting to install?
Also, using /usr/local as your prefix is a VERY, VERY BAD idea. Most OS
distributions come with a (typically old) version of OMPI installed in the
system area. Overlaying that with another version can easily lead to the errors
you show.
You should ...
Hello Open MPI,
I installed open-mpi with
./configure --prefix=/usr/local/
make all
make install
then I launched my sample code, which gave me this error.
My LD_LIBRARY_PATH=/usr/local/
I have attached the output file with this mail.
Could someone please help me with this.
Thanks,
Kamal