$ ldd pw.x
What prints out?
It seems that /usr/lib/libblacsCinit-openmpi.so.1 gets linked into your
binary at some point, but it should not.
That is the BLACS from your OS, used by ScaLAPACK, whereas it is supposed
to be the Intel one.
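To see which BLACS/ScaLAPACK libraries actually get linked, a small sketch; the helper name is made up, and `./pw.x` is assumed to sit in the current directory:

```shell
# List any BLACS/ScaLAPACK entries in a binary's shared-library dependencies.
check_linked() {
    ldd "$1" 2>/dev/null | grep -Ei 'blacs|scalapack' || echo "no blacs/scalapack found"
}

check_linked ./pw.x   # pw.x assumed to be in the current directory
```

If the output mentions /usr/lib/libblacsCinit-openmpi.so.1, the system BLACS is being picked up instead of the Intel one.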
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Did you build the code and run it on different machines?
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2018-08-07 19:18 GMT-05:00 Aziz Fall :
> ok so regardless of whether I run pw.x with mpirun or not I get the same
> error saying pw.x:
Everything seems good. I'm wondering if the problem is your mpirun.
Could you run pw.x directly, without mpirun?
If you type "which mpirun" on the machine you are running on, is it from
the Intel Parallel Studio folder? You said you are using Intel MPI.
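A quick sketch for checking which MPI stack is actually first in PATH; MPICH-derived wrappers, including Intel MPI's, accept -show to print the backend compiler, and the guards below are just defensive fallbacks:

```shell
# Report where the MPI wrappers come from; fall back gracefully if absent.
command -v mpirun || echo "mpirun not in PATH"
if command -v mpif90 >/dev/null; then
    mpif90 -show     # prints the underlying compiler and flags
else
    echo "mpif90 not in PATH"
fi
```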
Ye
===========
Ye
There was a typo: use mpiifort instead of mpif90 in your case.
./configure MPIF90=mpiifort CC=icc --with-scalapack=intel
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2018-08-07 20:36 GMT-05:00 Ye Luo :
> Why don't you trust the configure? The config
The parallel compilation is well maintained, at least for pw; I never
have issues with "make -j 16 pw". For other targets you can try parallel
compilation, but some dependencies may not be well maintained.
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2018-08-07
Judging from the ldd pw.x output, your MPI wrapper mpif90 is using
gfortran underneath.
Could you add MPIF90=mpiifort in your configure line?
But I'm not sure if this is the real problem.
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2018-08-07 19:24 GMT
/usr/lib/libblacsCinit-openmpi.so.1 seems to belong to the
libblacs-openmpi1 package in Ubuntu.
This indicates you have OpenMPI and BLACS installed from the package
manager.
Are you sure your Parallel Studio is the Cluster Edition and that you have
Intel MPI?
Did you source psxevars.sh from Parallel Studio?
.
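For reference, a hedged sketch of that environment setup; the install prefix and version below are assumptions, so adjust them to the actual Parallel Studio path on the machine:

```shell
# Hypothetical path for a Parallel Studio XE Cluster Edition install.
source /opt/intel/parallel_studio_xe_2018/psxevars.sh intel64
which mpirun   # should now resolve inside the Intel install tree
```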
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2018-08-07 21:19 GMT-05:00 Will DeBenedetti :
> Maybe take this offline?
>
> Will DeBenedetti
> Cornell University
>
> Sent from my iPhone
>
> On Aug 7, 2018, at 22:11, Aziz Fall wrote:
That seems to be for OpenMPI.
Did you put --with-scalapack=intel when you run configure?
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2018-08-07 16:42 GMT-05:00 Aziz Fall :
> Dear Quantum espresso team,
>
> So recently I have been tryi
Hi QE website maintainers,
On the left side of http://www.quantum-espresso.org/forum,
the three mailing lists listed are not up-to-date.
Please mark them obsolete and add the new ones.
Best,
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
-19.2954* -19.2666 -19.1922 -19.1602
-2.9201 -2.8660 -2.4351 -2.3823 7.9845 8.0040 8.8546 8.8783
9.9004 9.9266 11.0131 11.0685 11.6606 11.6923 12.7962 12.8092
15.2910
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
TiO2
0.3050990 0.3050990 0.000
O 0.6949010 0.6949010 0.000
O 0.1949010 0.8050990 0.500
O 0.8050990 0.1949010 0.500
K_POINTS automatic
8 8 8 1 1 1
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
l.
Second, I noticed that ph.x uses quite a large amount of scratch space
even when reduce_io is switched on.
I'd like to know if it is possible to have a feature similar to
disk_io='none' in pw.x.
Thank you so much!
Ye
===========
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
.
Thank you so much.
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2016-04-25 15:09 GMT-05:00 Filippo SPIGA <filippo.sp...@quantum-espresso.org
>:
> Dear everybody,
>
> I am pleased to announce you that version 5.4.0 of Quantu
that q.
If you are comfortable with threaded pw.x, ph.x also benefits from
threaded MKL and FFT, and the time to solution is further reduced.
For more details, you can look into PHonon/examples/Image_example.
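Such a threaded, image-parallel run can be sketched like this; the rank and thread counts and the file names are invented for illustration:

```shell
# 16 MPI ranks split into 4 images, 4 MKL/OpenMP threads per rank.
export OMP_NUM_THREADS=4
mpirun -np 16 ph.x -nimage 4 -in ph.in > ph.out
```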
P.S.
Your affiliation is missing.
===
Ye Luo, Ph.D.
Leadership Computing Facility
.
If you still have disk issues, use fewer images and more threads.
Ciao,
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2016-05-05 8:42 GMT-05:00 Coiby Xu <coiby...@gmail.com>:
> Dear Dr. Luo,
>
> Thank you for your detailed reply!
>
>
these questions.
1) Any update on releasing DFT+U phonon? I saw the paper came out years ago.
2) Any way to reduce the phonon disk I/O to 'none', similar to pw.x?
reduce_io seems to help very little during the calculation.
Best regards,
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
(__LINUX_ESSL)
to take the cdiagh_aix and it works well.
Ciao,
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2016-05-09 1:16 GMT-05:00 Paolo Giannozzi <p.gianno...@gmail.com>:
> Hi
>
> as far as I now, you should NOT use "-D__E
in kpoints and bands. However, in the charge-density h5, the g-vectors
are still in a .dat file, and the content of the h5 does not seem human
readable. Do you have plans to improve them?
Thanks to everyone. It's great to have a major release of QE.
Best regards,
Ye
=======
Ye Luo, Ph.D.
-density.hdf5 more readable.
I ran h5ls charge-density.hdf5 and the contents appear to be in a machine
format. I would expect spin up/down datasets with the coefficients.
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2016-08-29 9:30 GMT-05:00 nicola
Hi Filippo,
It seems that this path is always included. The Intel compiler (16 u3)
doesn't complain, but GNU (5.4) does.
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2016-08-29 10:08 GMT-05:00 Filippo Spiga <filippo.sp...@quantum-espresso.
Thanks. I don't know what kind of compiler checking is done in the
configure step.
Maybe a check can be added to prevent users from using old compilers in
this case.
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2016-08-30 3:56 GMT-05:00 nicola varini
.
To use my resources more efficiently, I prefer the Grid way of computing:
break the whole calculation by q and then distribute the irreducible
representations among images.
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2016-09-02 10:40 GMT-05:00 Thomas Brumme
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2016-10-28 12:29 GMT-05:00 Vahid Askarpour <vh261...@dal.ca>:
> Dear QE Users,
>
> I am working on some modifications to the QE-6.0 code using symmetry. When
> I try to combine a 3-D array scattered a
Hi Vahid,
A segfault in mp_sum doesn't necessarily mean the problem is there.
Probably you wrote to the output array at an invalid location before
mp_sum. Check your allocation of the output array and the copy to make
sure they are correct.
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
your 3-D arrays input and output?
Are you allocating sufficient size for the output, or did you deallocate
it by mistake? How large is the output array? If it exceeds 2^32 elements,
you might hit the 32-bit integer bug of the variable msglen.
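A back-of-the-envelope check of that limit; the array shape below is invented for illustration, the point is only that element counts beyond a signed 32-bit integer overflow a 32-bit message length:

```shell
# Elements in a hypothetical 1024 x 1024 x 1024 x 4 output array.
elements=$((1024 * 1024 * 1024 * 4))
limit=2147483647                    # largest signed 32-bit integer
if [ "$elements" -gt "$limit" ]; then
    echo "overflow risk: $elements elements"
fi
```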
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Is kunit=1?
Are your nkf and nkf_tot the same as the nks and nks_tot provided by the
QE environment?
Could you print nbase for each processor and check whether the numbers
are as expected?
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2016-10-30 18:59 GMT-05
the standard flag may help reduce the maintenance pain of configure.
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
the same issue.
I reported it last year.
http://qe-forge.org/pipermail/q-e-developers/2016-September/001355.html
It probably dates from the phase when the BGQ was deployed at CINECA; it
was maintained then, but not recently.
Best,
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
Not sure if it will help with your issue, but Intel 17 update 2
(2017.2.174) is available.
Ye
===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory
2017-03-17 15:03 GMT-05:00 Ari P Seitsonen <ari.p.seitso...@iki.fi>:
>
> Dear Carlo,
>
> I
the performance claim is mostly hardware related, not software related.
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
Christoph Wolf 于2019年3月1日周五 上午4:14写道:
> Dear all,
>
> please forgive this "beginner" question
Did you really want to use the same PP for both Fe and Cr?
ATOMIC_SPECIES
Fe 55.845 Fe.pbe-spn-kjpaw_psl.1.0.0.UPF
Cr 55.845 Fe.pbe-spn-kjpaw_psl.1.0.0.UPF
Cu 63.546 Cu.pbe-spn-kjpaw_psl.1.0.0.UPF
O 15.9994 O.pbe-n-kjpaw_psl.1.0.0.UPF
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
and a
main function and then use it to debug your compiler installation first.
This is not a QE problem and hopefully you can find answers on Google.
Best,
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
Kanka Ghosh 于2
in seconds?
In addition, you may try ELPA, which usually gives better performance
than ScaLAPACK.
Thanks,
Ye
===========
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
On Wed, May 27, 2020 at 9:27 AM Michal Krompiec
wrote:
> H
mpif90 from Intel MPI invokes gfortran.
You should use mpiifort instead:
$ ./configure --with-scalapack=intel MPIF90=mpiifort
assuming mpiifort exists in your PATH.
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
Make sure your HDF5 is compiled with the same compiler you are using for
QE and that the library has Fortran support enabled.
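One way to sketch-check an HDF5 install; h5cc -showconfig is a standard HDF5 tool that dumps the build configuration, and the guard handles machines where it is absent:

```shell
# Show the compilers and language support an HDF5 build was configured with.
if command -v h5cc >/dev/null; then
    h5cc -showconfig | grep -iE 'fortran|compiler'
else
    echo "h5cc not found"
fi
```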
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
On Thu, Nov 19, 2020 at
a divisor, there is additional imbalance in the calculation.
So selecting npool as a divisor is a recommendation for getting better
performance, not a requirement.
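The imbalance is easy to see with ceiling-division arithmetic; the numbers below are invented for illustration:

```shell
# With nk k-points and npool pools, the busiest pool gets ceil(nk/npool).
nk=10
npool=4
max_per_pool=$(( (nk + npool - 1) / npool ))
even_per_pool=$(( nk / npool ))
echo "busiest pool holds $max_per_pool k-points; even split would be $even_per_pool"
```

The pools holding the extra k-point gate the time to solution, which is the imbalance when npool does not divide nk.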
Ye
===========
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
On Tue, Ja
Which version of the Intel compiler? If it is really old, I would like to
stop allowing it in CMake.
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
On Sat, Jun 12, 2021 at 3:19 PM Paolo Giannozzi
wrote:
> It'
. Does your machine have 16 physical cores, or 8 cores with 16
hyperthreads?
3. To validate it is actually a compiler regression, run with 1 MPI rank
and compare the timing.
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
not to run at max turbo frequency.
2. The node may be shared with others and other things may be running.
You probably know your machine best.
If you really think oneAPI has a regression, you may contact Intel
support, as they should care about their product.
Ye
===
Ye Luo
This time oneAPI runs faster. The ifort in oneAPI should be very similar
to the one in previous Parallel Studio releases.
I think the performance difference comes from your machine; neither QE
nor the compiler plays any role here.
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
since you turned on OpenMP. Add libfftw3_omp.a
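A hedged sketch of the corresponding configure invocation; FFT_LIBS is a standard QE configure variable, but the library names assume a typical FFTW3 install with threading enabled:

```shell
# Link both the threaded and the serial FFTW3 libraries when OpenMP is on.
./configure --enable-openmp FFT_LIBS="-lfftw3_omp -lfftw3"
```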
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
On Thu, Mar 11, 2021 at 11:58 AM Chandan Kumar Choudhury <
ckch...@g.clemson.edu> wrote:
> Hi Pietro,
>
of QE features. If your
needed code path hits the zdotc issue, just report a bug.
Best,
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
On Sat, Dec 18, 2021 at 11:22 PM Viejay Ordillo wrote:
> Dear Ye,
>
Could you try CMake?
https://gitlab.com/QEF/q-e/-/wikis/Developers/CMake-build-system
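A minimal sketch of such a CMake build, assuming the Intel wrappers discussed elsewhere in the thread; the options shown are standard CMake variables, not QE-specific flags:

```shell
# Out-of-source build with explicit compilers.
mkdir -p build && cd build
cmake -DCMAKE_Fortran_COMPILER=mpiifort -DCMAKE_C_COMPILER=icc ..
make -j8 pw
```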
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
On Sat, Dec 18, 2021 at 5:20 AM Viejay Ordillo wrote:
> Dear QE users,
Also make sure you have a clean source directory before you start. Module
files built previously may cause trouble.
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
On Sat, Dec 18, 2021 at 11:10 AM Ye Luo w
but requires files like fftw3.f03 to be included. For this reason, both
the include path and the library path need to be properly set on the
compile command line. Clearly something is missing.
Best, Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
QEF/q-e/-/wikis/Developers/CMake-build-system
What we do with CMake is first locate the exact library file and then
specify its full path on the link line.
Hopefully that gives you a better chance of a successful compilation.
Best,
Ye
=======
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
LD_LIBRARY_PATH should only affect applications at runtime.
No part of configure or the makefiles should rely on it; otherwise it is
a disaster.
I think QE's configure doesn't depend on LD_LIBRARY_PATH, so setting it
won't fix anything.
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
There was an upgrade on the server, after which curl was not happy with
http. Just change http to https in test-suite/ENVIRONMENT.
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
On Thu, Feb 3, 2022 at 4:31 PM Husak Mi
>
>
> export NETWORK_PSEUDO=
> http://www.quantum-espresso.org/wp-content/uploads/upf_files/
> to
> export NETWORK_PSEUDO=
> https://www.quantum-espresso.org/wp-content/uploads/upf_files/
> in the file
> ENVIRONMENT
>
Yes this is what I meant. You also need to delete all the bad ones left on
the
I only said to edit ENVIRONMENT. The download script will append the
pseudopotential file name to NETWORK_PSEUDO and download the file.
https://www.quantum-espresso.org/wp-content/uploads/upf_files/ is not a
browsable website.
Then `make run-tests-pw-parallel` will work.
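The edit can be sketched as a one-line substitution; it is demonstrated here on a throwaway copy rather than the real test-suite/ENVIRONMENT file, and GNU sed's in-place flag is assumed:

```shell
# Demonstrate the http -> https switch on a scratch copy of the setting.
printf 'export NETWORK_PSEUDO=http://www.quantum-espresso.org/wp-content/uploads/upf_files/\n' \
    > ENVIRONMENT.demo
sed -i 's|http://|https://|' ENVIRONMENT.demo
cat ENVIRONMENT.demo
```

Applying the same sed line to test-suite/ENVIRONMENT performs the fix described above.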
Ye
===
Ye Luo, Ph.D.
the input/output/log of run)
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
On Thu, Feb 3, 2022 at 3:27 PM Husak Michal wrote:
> Hi
>
> I am trying to find out why the MPI build (gfortran + OpenMP) does not