Dear Brad,
I can only confirm what Paolo and Michal suggested.
Even with Infiniband, the efficiency of the FFT parallelization drastically
decreases with each new node, WHATEVER THE CODE (not only QE) or the library.
For SLURM jobs, if you ask for 2 nodes of 16 cores, the first 16 are indexed 1
to 16
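As an illustration of the node/core layout described above, a minimal SLURM batch script requesting 2 nodes of 16 cores might look like this (a sketch only; the time limit and the input/output file names scf.in/scf.out are placeholders, not from this thread):

```shell
#!/bin/bash
#SBATCH --nodes=2               # two nodes
#SBATCH --ntasks-per-node=16    # 16 MPI ranks per node: the ranks on the
#SBATCH --time=01:00:00         # first node come first, then the second's

# scf.in / scf.out are placeholder file names
mpirun -np 32 pw.x -in scf.in > scf.out
```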
Dear Brad,
Fast communications here means Infiniband or another RDMA interconnect. Make
sure your MPI uses RDMA; I’ve seen systems where it isn’t enabled by default.
That said,
if you use k-point parallelization you can get away with gigabit ethernet
as Paolo mentioned.
Best wishes,
Michal Krompiec
Merck KGaA
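One way to check whether RDMA transports are in play, assuming Open MPI is the MPI implementation in use (a sketch; other MPI stacks have different diagnostic tools):

```shell
# List the transport components Open MPI was built with; an RDMA-capable
# setup shows components such as 'ucx' or 'openib' rather than only
# 'tcp' and 'self'
ompi_info | grep -i -E 'btl|pml'
```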
Paolo,
I believe the nodes I am using have gigabit connections. There are additional
nodes that have 10 or 25 gigabit connections but I don't think I would land on
one of them without specifically requesting them. What communication speed
would be appropriate for QE's needs?
I also did
Are there fast communications between the two nodes? If not, the parallel
distributed 3D FFT will be very slow (note the time taken by fft_scatt_yz).
You might find it convenient to exploit k-point parallelization, which requires
much less communication: for instance, "mpirun -n 32 pw.x -nk 2" (2
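The trade-off Paolo describes can be sketched as follows (rank counts and file names are illustrative): with -nk, the MPI ranks are split into pools that each handle a subset of the k-points, so the distributed 3D FFT runs within a pool instead of across all ranks, cutting inter-node traffic.

```shell
# All 32 ranks share one distributed FFT grid: heavy all-to-all traffic
mpirun -n 32 pw.x -in scf.in > scf_no_pools.out

# Two pools of 16 ranks each: each pool does its own FFTs, ideally
# staying within a single 16-core node
mpirun -n 32 pw.x -nk 2 -in scf.in > scf_pools.out
```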
I really appreciated your help, thanks a lot Dr. Tamas
Sent from Yahoo Mail for iPhone
On Thursday, November 5, 2020, 3:57 PM, Tamas Karpati
wrote:
Dear Omer,
Yes, I meant an SO2 gas-phase simulation. This is an alternative to using the
physisorbed slab+SO2 complex as "reactant", R. Question of
Paolo,
Thank you for your suggestion. I will add recompiling to move to 6.6 to my
to-do list. For now, I corrected the pseudopotential files as you indicated and
the calculation ran successfully. It has become slightly faster, but still
much slower than running on a single node (3:30s vs
On Thu, Nov 5, 2020 at 3:05 PM Baer, Bradly
wrote:
> *Pseudo file Ga.pbe-dn-kjpaw_psl.1.0.0.UPF has been fixed on the fly.*
> *To avoid this message in the future, permanently fix *
> * your pseudo files following these instructions: *
>
Tamas,
I will check the disk space, but this exact input file will work when running
on only one node. Then it crashes when running on 2 nodes. My suspicion is
that there is some issue with information being stored correctly across
multiple nodes that I don't understand.
-Brad
Dear Brad,
Your missing CRASH file reminds me of a time when I made QE eat up all the
disk space on the disk being used.
The end of the logs and a lot more were missing; only "df ." gave me the hint.
So the naive question arises: is that not your case, too?
t
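The "df ." check above can be automated; here is a minimal sketch (the 1 GB threshold and the helper name free_kb are my own assumptions, not from this thread) that warns before a run if the working filesystem is nearly full:

```shell
#!/bin/sh
# Print the free space (in 1K blocks) on the filesystem holding a directory.
free_kb() {
    # POSIX 'df -P': line 2 of the output, 4th field is 'Available'
    df -P "$1" | awk 'NR==2 {print $4}'
}

avail=$(free_kb .)
# Warn below ~1 GB (1048576 KB) -- the threshold is an arbitrary assumption
if [ "$avail" -lt 1048576 ]; then
    echo "WARNING: only ${avail} KB free; QE may truncate logs and CRASH files" >&2
fi
```

Running it from the directory where QE writes its output gives an early hint before logs get silently truncated.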
On Thu, Nov 5, 2020 at 3:05 PM Baer, Bradly
wrote:
Good morning,
I am using QE 6.5. I believe pw.x is crashing, but I cannot find the CRASH file
afterwards, and there is no crash information in the .out file; it just stops.
I've pasted the output below.
Program PWSCF v.6.5 starts on 4Nov2020 at 16:25:44
This program is part of the
Hi Shitangshu,
Are you familiar with Docker? There is a QE-GPU 6.6a1 container on NGC that is
fairly well optimized and is super easy to run.
docker run --gpus all -it nvcr.io/hpc/quantum_espresso:v6.6a1
Regards,
Louis
From: users On Behalf Of Sitangshu
Bhattacharya
Sent: Thursday,
Dear Louis and Pietro,
Thanks for your effort in this... In fact, I tried with CUDA 10.2 and with
the version offered by Pietro:
https://gitlab.com/QEF/q-e-gpu/-/archive/hotfix/q-e-gpu-hotfix.tar.bz2
$ ./install/configure CC=pgcc F77=pgf90 FC=pgf90 F90=pgf90 MPIF90=mpif90
This is a test message, sent to test the mailing list after a system
upgrade. You may ignore it, but maybe you shouldn't: there are frequent
problems with posts coming from some providers (notably, @ymail.com) that
bounce when delivered to some other providers (notably, @gmail.com). After
a few
Dear Omer,
Yes, I meant an SO2 gas-phase simulation. This is an alternative to using the
physisorbed slab+SO2 complex as "reactant", R. A question of methodology and
the nature of the materials. I cannot recall whether S+2O were together (as
SO2) or decomposed in your starting structure, but in the second case
Dear Andrii,
I checked the total magnetization after each scf step in the .out file,
and it comes out as 0.0, as expected, since the arrangement is
antiferromagnetic.
After observing again I found that, out of 6 atoms of Fe1 [3d6 4s2],
corresponding to the d states I have 3 different sets of pdos. More
Thanks, Duy Le, for the suggestion. Sometimes one does not see the wood for
the trees...
Which leaves only one question:
am I right that, for the SOC case, the second component is missing if I use
the output in Octave format?
Thomas
--
Dr. rer. nat. Thomas Brumme
Theoretical chemistry
TU
Hello,
I can't find a way to make ph.x 6.6 run correctly on my system.
It is a MAPbI3 orthorhombic cell containing 48 atoms (Pb, I, C, N, H), with
a 2x2x4 grid of k-points. ph.x correctly lists all representations (last
line printed is "Representation 144 1 modes - To be done"), it writes the
On Wed, Nov 4, 2020 at 11:28 PM Baer, Bradly
wrote:
> Now that I have two nodes, the script for a single node results in a crash
> shortly after reading in the pseudopotentials.
>
which version of QE are you using, and which crash do you obtain, with
which executable?
Paolo
--
Paolo
Dear Dr. Tamas
Sorry, what do you mean by “you most probably need an SO2 simulation
(optimization+phonons) rather than the same for a surface-attached SO2 or SO+O.
Big difference!”? Do you mean I should do phonons for SO2 in the gas phase?
I do agree with you: a 3-atom phonon is non-physical, I