Ray and Gus,
Thanks a lot for your help. I followed Gus' steps, but I still have the same
problem with the compilation (I didn't check the libraries part, though!). The
executables for Quantum ESPRESSO work fine; I have got them in
espresso-4.0.3/bin: dynmat.x, iotk, iotk_print_kinds.x, iotk.x
Hi Ray, Elio, list
Hmmm ... somehow I didn't need to do "make all" in the QE top
directory before doing "make" in the EPW top directory.
Please, see the email that I just sent to the list.
I stopped at the QE configure step, as per the recipe on the EPW site.
I presume the latter "make" takes
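Roughly, the sequence I followed looked like this; take it as a sketch from memory, since the EPW directory name and its exact location inside the QE tree are assumptions, so check the EPW site for the real layout:
```
# configure QE 4.0.3, but stop before "make all" in its top directory
cd espresso-4.0.3
./configure

# then build EPW from its own top directory (location inside the QE tree assumed)
cd EPW
make
```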
Hi Elio
For what it is worth, I followed the instructions on
the EPW web site, and the program compiled flawlessly.
Sorry, I don't know how to use/run it,
don't have the time to learn it, and won't even try.
1) Environment:
If your MPI/OpenMPI is not installed in a standard location,
you
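A minimal sketch of what I mean, where the installation prefix is only an example:
```
# example only: make a non-standard Open MPI visible to the build
export PATH=/opt/openmpi-1.6.5/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi-1.6.5/lib:$LD_LIBRARY_PATH
which mpif90    # should now point at the intended compiler wrapper
```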
Hi Elio and everyone,
I went to the EPW website and their instructions seem lacking with
respect to the Quantum ESPRESSO 4.0.3 requirement. The EPW folks want
to leverage the Quantum ESPRESSO intermediate object files. By knowing how
it builds and telling you where to put their package, they
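Roughly, the layout they appear to expect looks like this; the directory names below are my guess from the relative paths in Elio's link line, not something taken from the EPW documentation:
```
# sketch of the assumed tree: EPW sits two levels below the QE top directory,
# so its link line can pick up the already-compiled QE objects
ls espresso-4.0.3/Modules/*.o      # atom.o, cell_base.o, ... produced by the QE build
ls espresso-4.0.3/EPW/src          # EPW sources; the link line refers to ../../Modules
```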
I have already done all of the steps you mentioned. I have installed the older
version of Quantum ESPRESSO, configured it, and followed all the steps on the
EPW website when I got that error in the last step. In fact, I do have the
latest version of Quantum ESPRESSO, but since I work with electron
It is hard to tell why, but the object files (yes, a2f.o, etc.)
seem not to have been compiled from the corresponding source files
(a2f.f90 or similar).
In general, the executable (your epw.x) is compiled only after all
the prerequisite object files (the .o) and modules (the .mod)
have been
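A quick way to check, as a sketch only (the relative paths assume EPW was unpacked inside the espresso-4.0.3 tree, as in your link line):
```
# from the EPW src directory: were the QE objects and modules ever built?
ls ../../Modules/*.o   2>/dev/null | wc -l
ls ../../Modules/*.mod 2>/dev/null | wc -l
# zero counts would mean the QE side was never compiled
```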
This was the first error, yes. What do you mean, other files are missing? Do you
mean the atom.o, basic_algebra_routines.o...? Well, the f90 files present in
the src subdirectory start from a2f.f90, allocate_epwq.f90,...and so on... I am
also not sure why there is that backslash "\" just before the
Was the error that you listed the *first* error?
Apparently various object files are missing from the
../../Modules/ directory, and were not compiled,
suggesting something is amiss even before the
compilation of the executable (epw.x).
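One thing worth trying, as a sketch only (the exact make target depends on the QE 4.0.3 Makefile, so adjust as needed): build the QE side first so that Modules/ actually gets populated, then repeat the EPW make.
```
# rebuild the QE pieces that EPW links against, then retry EPW
cd espresso-4.0.3
make pw          # compiles Modules/ and the PW code in QE 4.x
cd EPW && make
```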
On 09/03/2014 05:20 PM, Elio Physics wrote:
Dear all,
I
Dear all,
I am really a beginner in Fortran and Linux. I was trying to compile a piece of
software (EPW). Everything was going fine (or at least that is what I thought):
mpif90 -o epw.x ../../Modules/atom.o ../../Modules/basic_algebra_routines.o
../../Modules/cell_base.o ../../Modules/check_stop.o
Exiting with a non-zero status is considered to indicate a failure that needs
reporting.
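As a minimal illustration (the binary name here is generic, not taken from Nico's setup): mpirun itself exits with a non-zero status when a rank does, which is what wrappers and batch systems then report as a failed job.
```
# sketch: a rank returning EXIT_FAILURE makes the whole mpirun invocation fail
mpirun -np 2 ./exit_test
echo $?          # non-zero, mirroring the ranks' exit status
```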
On Sep 3, 2014, at 1:48 PM, Nico Schlömer wrote:
> Hi all,
>
> with OpenMPI 1.6.5 (default Debian/Ubuntu), I'm running the program
>
> ```
> #include <mpi.h>
> #include <stdlib.h>
>
> int
Hi all,
with OpenMPI 1.6.5 (default Debian/Ubuntu), I'm running the program
```
#include <mpi.h>
#include <stdlib.h>

int main( int argc, char* argv[] )
{
  MPI_Init( &argc, &argv );
  MPI_Finalize();
  return EXIT_FAILURE;
}
```
that unconditionally returns an error flag. When executing this, I'm
somewhat surprised to get
Thanks Matt - that does indeed resolve the "how" question :-)
We'll talk internally about how best to resolve the issue. We could, of course,
add a flag to indicate "we are using a shellscript version of srun" so we know
to quote things, but it would mean another thing that the user would have
No, there are 12 cores per node, and 12 MPI processes are assigned to each
node. The total RAM usage is about 10% of available. We suspect that the
problem might be the combination of MPI message passing and disk I/O to the
master node, both of which are handled by Infiniband. But I do not know
On Tue, Sep 2, 2014 at 8:38 PM, Jeff Squyres (jsquyres)
wrote:
> Matt: Random thought -- is your "srun" a shell script, perchance? (it
> shouldn't be, but perhaps there's some kind of local override...?)
>
> Ralph's point on the call today is that it doesn't matter *how*
Jeff,
I tried your script and I saw:
(1027) $ /discover/nobackup/mathomp4/MPI/gcc_4.9.1-openmpi_1.8.2/bin/mpirun
-np 8 ./script.sh
(1028) $
Now, the very first time I ran it, I think I might have noticed a blip of
orted on the nodes, but it disappeared fast. When I re-run the same
command, it
Hi,
today I installed openmpi-1.9a1r32664 on my machines (Solaris
10 Sparc (tyr), Solaris 10 x86_64 (sunpc1), and openSUSE Linux 12.1
x86_64 (linpc1)) with Sun C 5.12 and gcc-4.9.0.
I get the following internal failure for my gcc.4.9.0-version on
Linux. I also have the other errors which I
Hi,
today I installed openmpi-1.9a1r32664 on my machines (Solaris
10 Sparc (tyr), Solaris 10 x86_64 (sunpc1), and openSUSE Linux 12.1
x86_64 (linpc1)) with Sun C 5.12 and gcc-4.9.0.
I get the following error for my Sun C version on Solaris Sparc.
tyr small_prog 129 ompi_info | grep MPI:
Hi,
today I installed openmpi-1.9a1r32664 on my machines (Solaris
10 Sparc (tyr), Solaris 10 x86_64 (sunpc1), and openSUSE Linux 12.1
x86_64 (linpc1)) with Sun C 5.12 and gcc-4.9.0.
I get the following error for my Sun C version on Solaris x86_64.
sunpc1 small_prog 110 ompi_info | grep MPI:
On 03.09.2014 at 13:11, Donato Pera wrote:
> I get
>
> ompi_info | grep grid
> MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.6.5)
Good.
> and using this script
>
> #!/bin/bash
> #$ -S /bin/bash
> #$ -pe orte 64
> #$ -cwd
> #$ -o ./file.out
> #$ -e ./file.err
>
>
Hi,
today I installed openmpi-1.9a1r32664 on my machines (Solaris
10 Sparc (tyr), Solaris 10 x86_64 (sunpc1), and openSUSE Linux 12.1
x86_64 (linpc1)) with Sun C 5.12 and gcc-4.9.0.
I get the following internal failure for my Sun C version on
Linux.
linpc1 small_prog 112 ompi_info | grep MPI:
Hi,
I get
ompi_info | grep grid
MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.6.5)
and using this script
#!/bin/bash
#$ -S /bin/bash
#$ -pe orte 64
#$ -cwd
#$ -o ./file.out
#$ -e ./file.err
export LD_LIBRARY_PATH=/home/SWcbbc/openmpi-1.6.5/lib:$LD_LIBRARY_PATH
export
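For comparison, a complete script of this shape usually finishes with an mpirun line that relies on the SGE tight integration, i.e. no -np and no hostfile; the program name and the PATH line below are illustrative assumptions, not Donato's actual script:
```
#!/bin/bash
#$ -S /bin/bash
#$ -pe orte 64
#$ -cwd
#$ -o ./file.out
#$ -e ./file.err

# point the job at the Open MPI 1.6.5 installation
export PATH=/home/SWcbbc/openmpi-1.6.5/bin:$PATH
export LD_LIBRARY_PATH=/home/SWcbbc/openmpi-1.6.5/lib:$LD_LIBRARY_PATH

# with gridengine support built in, mpirun takes the slot count
# and node list from SGE automatically
mpirun ./my_program
```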
Hi,
On 03.09.2014 at 12:17, Donato Pera wrote:
> I'm using Rocks 5.4.3 with SGE 6.1. I installed
> a new version of Open MPI 1.6.5. When I run
> a script using SGE+Open MPI (1.6.5) on a single node
> I don't have any problems, but when I try to use more nodes
> I get this error:
>
>
> A hostfile
Hi Ralph,
> I believe this was fixed in the trunk and is now scheduled to come
> across to 1.8.3
Today I installed openmpi-1.9a1r32664 and the problem still exists.
Is the backtrace helpful or do you need something else?
tyr java 111 ompi_info | grep MPI:
Open MPI: 1.9a1r32664
Hi,
today I installed openmpi-1.9a1r32664 on my machines with
Sun C 5.12 and gcc-4.9.0. Unfortunately shmem.jar isn't
available on my Solaris machines.
tyr java 112 mpijavac InitFinalizeMain.java
warning: [path] bad path element
"/usr/local/openmpi-1.9_64_cc/lib64/shmem.jar": no such file or directory