Re: [SIESTA-L] forcesToMaster: Etot or free energy?

2021-11-12, from Karen Fidanyan

Hi Alberto,

thank you for the detailed clarification.
I don't know whether there is a universal convention on which energy to pass
to a socket. I know that FHI-aims sends the free energy, and that i-PI
is designed to be agnostic about the "force codes" it uses. i-PI uses
DFT codes as well as force fields and, I think, always assumes that
the forces are consistent with the energy landscape. Although I cannot
name offhand any algorithm inside i-PI that such an inconsistency would
break outright, it makes life harder if one wants to analyze, e.g.,
energy conservation along an NVE trajectory. I discussed this issue with
the core developers of i-PI, and they prefer to receive the free energy.


But since the same socket protocol is used by ASE and, potentially,
many other projects, and since it has been E_tot for a long time, I see
the point of having E_tot in some cases. Introducing a flag in Siesta,
say Master.EnergyKind, would be a backward-compatible compromise.


I can create an issue on GitLab to discuss it further.

Best regards,
Karen Fidanyan
PhD student
Max Planck Institute for the Structure and Dynamics of Matter
Hamburg, Germany

On 11/10/21 9:45 AM, Alberto Garcia wrote:

Hi, Karen,

There are several facets to this issue:

* When a finite temperature (smearing) is used just for convergence 
acceleration, the free energy is a
computational artifact that is formally needed to restore the variational 
property. Tests for the size of the smearing and
for the fineness of the k-point grid should be carried out to monitor the 
convergence.

* A given client code might be interested not in the free energy but in the
zero-temperature (extrapolation to zero smearing) E_tot. Depending on the type
of smearing, E_tot and E_free deviate from each other: the difference is
quadratic in the smearing for Fermi-Dirac smearing, but much smaller for
Methfessel-Paxton or for the cold smearing of Marzari and Vanderbilt. (See the
article by Kresse and Furthmüller [1], and the relations written out after
this list.)

* There is only one slot for the energy in the i-PI protocol specification, and 
it is not clear what that energy should be.

Depending on your use case, you might want to use MP or cold smearing. There 
could also be an enhancement of the i-PI protocol to negotiate the kind of 
'energy' to be sent. Maybe an fdf flag in Siesta could be used to select what 
is sent through the interface. What do other codes send through the interface?

   Best regards,

   Alberto


[1] Kresse, G., and J. Furthmüller. 1996. "Efficiency of Ab-Initio Total Energy
Calculations for Metals and Semiconductors Using a Plane-Wave Basis Set."
Computational Materials Science 6 (1): 15-50.
https://doi.org/10.1016/0927-0256(96)00008-0


- On 2 November 2021 at 22:22, Karen Fidanyan
karen.fidan...@mpsd.mpg.de wrote:

| Dear Siesta users,
|
| I noticed that when communicating with a Master code, e.g. via i-PI
| socket, the values that would be sent to a socket are
| the forces and the _total_ energy, not the _free_ energy, even though I
| use Fermi-Dirac smearing.
| At the same time, in the manual I read: "We finally note that, in both
| cases (Fermi-Dirac and Methfessel-Paxton), once a finite temperature has
| been chosen, the relevant energy is not the Kohn-Sham energy, but the
| Free energy. In particular, the atomic forces are derivatives of the
| Free energy, not the KS energy."
| This means, to my understanding, that a client code receives an energy
| and forces that are inconsistent with each other, and the energy is
| somewhat ill-defined. Do you know why such a choice was made? Is there any
| problem with sending the free energy?
|
| Many thanks,
| Karen Fidanyan
| PhD student
| Max Planck Institute for the Structure and Dynamics of Matter
| Hamburg, Germany
|
|
|


-- 
SIESTA is supported by the Spanish Research Agency (AEI) and by the European 
H2020 MaX Centre of Excellence (http://www.max-centre.eu/)


[SIESTA-L] forcesToMaster: Etot or free energy?

2021-11-03, from Karen Fidanyan

Dear Siesta users,

I noticed that when communicating with a Master code, e.g. via i-PI 
socket, the values that would be sent to a socket are
the forces and the _total_ energy, not the _free_ energy, even though I 
use Fermi-Dirac smearing.
At the same time, in the manual I read: "We finally note that, in both 
cases (Fermi-Dirac and Methfessel-Paxton), once a finite temperature has 
been chosen, the relevant energy is not the Kohn-Sham energy, but the 
Free energy. In particular, the atomic forces are derivatives of the 
Free energy, not the KS energy."
This means, to my understanding, that a client code receives an energy
and forces that are inconsistent with each other, and the energy is
somewhat ill-defined. Do you know why such a choice was made? Is there any
problem with sending the free energy?


Many thanks,
Karen Fidanyan
PhD student
Max Planck Institute for the Structure and Dynamics of Matter
Hamburg, Germany


-- 
SIESTA is supported by the Spanish Research Agency (AEI) and by the European 
H2020 MaX Centre of Excellence (http://www.max-centre.eu/)


Re: [SIESTA-L] Running TranSIESTA for geometry relaxation

2021-10-29, from Karen Fidanyan

Dear Nick,

if possible, I would appreciate further advice/discussion.

I am trying to do a single-electrode setup (Nc=1). I discovered that if I
don't put any layers behind the electrode and set `DM-init
force-bulk`, I get a large error in the total charge. This is somewhat
unobvious - I would naively expect the electrode to be implicitly
attached to its own repetitions in the semi-infinite direction. Related
to that: I defined the Poisson solution as "ramp", hoping to use it for
a linear field later on (right now I do V=0 only) - could the problem be
related to this? If "ramp" is not okay, what would be the right way to
define a Poisson solution for Nc=1 with vacuum on the other side of the
cell? It is unclear from the manual how one can produce a file for the
TS.Poisson option.


Thank you!

Sincerely,
Karen Fidanyan
PhD student
Max Planck Institute for the Structure and Dynamics of Matter
Hamburg, Germany

On 10/21/21 10:11 PM, Nick Papior wrote:

Hi,

I haven't checked your output. But you generally need to be even more
careful about the electrode + extended electrode region in the device.
You really need to make sure the potential is screened towards the
electrode, and to constrain some atoms close to the electrode.


On Thu, 21 Oct 2021 at 22:03, Karen Fidanyan
<karen.fidan...@mpsd.mpg.de> wrote:


Dear Siesta users,

I try to run CG with TranSIESTA and face the following behavior:
1. The first geometry step, which consists of SIESTA and then
TranSIESTA, is fine.
2. But at the second step, which is TranSIESTA only, the total energy
goes to a high positive value and kind-of-converges to something
unreasonable (see siesta.out in the attachment). It happens even though
I constrain the electrode and 2 crystal layers directly adjacent to the
electrode.

Do you know how to interpret this and what the trick is to make
TranSIESTA work with moving atoms?

Many thanks,
Karen Fidanyan
PhD student
Max Planck Institute for the Structure and Dynamics of Matter
Hamburg, Germany





--
Kind regards, Nick


-- 
SIESTA is supported by the Spanish Research Agency (AEI) and by the European 
H2020 MaX Centre of Excellence (http://www.max-centre.eu/)


[SIESTA-L] Running TranSIESTA for geometry relaxation

2021-10-21, from Karen Fidanyan

Dear Siesta users,

I try to run CG with TranSIESTA and face the following behavior:
1. The first geometry step, which consists of SIESTA and then 
TranSIESTA, is fine.
2. But at the second step, which is TranSIESTA only, the total energy 
goes to a high positive value and kind-of-converges to something 
unreasonable (see siesta.out in the attachment). It happens even though 
I constrain the electrode and 2 crystal layers directly adjacent to the 
electrode.


Do you know how to interpret this and what the trick is to make TranSIESTA
work with moving atoms?


Many thanks,
Karen Fidanyan
PhD student
Max Planck Institute for the Structure and Dynamics of Matter
Hamburg, Germany



[Attachment: transiesta-cg.tgz (application/compressed-tar)]

-- 
SIESTA is supported by the Spanish Research Agency (AEI) and by the European 
H2020 MaX Centre of Excellence (http://www.max-centre.eu/)


[SIESTA-L] Atomic coordinates affect the auxiliary supercell

2021-08-06, from Karen Fidanyan

Dear Siesta users,

I have a question about how exactly periodicity works in Siesta and how 
the size of an auxiliary supercell is decided.


Why I ask: I want to run calculations with Siesta driven by an
external code (i-PI). I have two options for treating periodicity:
a) I can wrap atoms into the unit cell inside the master code. This
causes trouble with reusing the DM from a previous geometry if atoms
jump to the opposite side of the cell; I'll write another email about
that problem. (A minimal wrapping sketch follows below this list.)
b) I can pass the coordinates as they are, which implies the possibility
that some molecules will diffuse far away from the origin.


In this email, I ask about (b) only.
I do a simple test and put one atom far away from the box:
##
%block kgridMonkhorstPack
 2 0 0  0.0
 0 2 0  0.0
 0 0 1  0.0
%endblock kgridMonkhorstPack

LatticeConstant 1. Bohr
%block LatticeVectors
 10.   0.   0.
  0.  10.   0.
  0.   0.  10.
%endblock LatticeVectors

AtomicCoordinatesFormat Bohr
%block AtomicCoordinatesAndAtomicSpecies
   1.2    1.6    1.  2  O
   2.6    2.8    1.  1  H
   2.0   -0.9    1.  1  H
 105.   105.   105.  1  H
%endblock AtomicCoordinatesAndAtomicSpecies
##

Doing this, I get a huge auxiliary supercell:
**
superc: Internal auxiliary supercell:    23 x    23 x    23  = 12167
superc: Number of atoms, orbitals, and projectors:  48668 340676 523181
**
It's unclear to me how Siesta treats atoms outside the unit cell; the manual
just states: "Notice that the atomic positions (shifted or not) need not
be within the cell formed by LatticeVectors, since periodic boundary
conditions are always assumed". Could you please explain this and maybe
point me to literature or to the relevant pieces of the code?


Many thanks,
Karen Fidanyan
PhD student
Max Planck Institute for the Structure and Dynamics of Matter
Hamburg, Germany


-- 
SIESTA is supported by the Spanish Research Agency (AEI) and by the European 
H2020 MaX Centre of Excellence (http://www.max-centre.eu/)


[SIESTA-L] Compiling and running Siesta with openMPI-2.0.2, gfortran-6.3.0 (Debian 9)

2021-06-28, from Karen Fidanyan

Dear Siesta users,

I'm having a hard time trying to run SIESTA on my Debian-9 laptop.
I have:

GNU Fortran (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
OpenMPI-2.0.2-2
libblas 3.7.0-2, liblapack 3.7.0-2
libscalapack-openmpi1 1.8.0-13

My arch.make is the following:
**
.SUFFIXES:
.SUFFIXES: .f .F .o .a .f90 .F90 .c

SIESTA_ARCH = gfortran_openMPI

FPP = $(FC) -E -P -x c
FC = mpifort
FC_SERIAL = gfortran
FFLAGS = -O0 -g -fbacktrace -fcheck=all #-Wall
FFLAGS_DEBUG = -g -O0

PP = gcc -E -P -C
CC = gcc
CFLAGS = -O0 -g -Wall

AR = ar
RANLIB = ranlib
SYS = nag

LDFLAGS = -static-libgcc -ldl

BLASLAPACK_LIBS = -llapack -lblas \
                  -lscalapack-openmpi -lblacs-openmpi \
                  -lblacsF77init-openmpi \
                  -lblacsCinit-openmpi \
                  -lpthread -lm

MPI_INTERFACE = libmpi_f90.a
MPI_INCLUDE   = .

FPPFLAGS_MPI = -DMPI -DMPI_TIMING -D_DIAG_WORK
FPPFLAGS = $(DEFS_PREFIX) -DFC_HAVE_FLUSH -DFC_HAVE_ABORT $(FPPFLAGS_MPI)

INCFLAGS = $(MPI_INCLUDE)

LIBS = $(BLASLAPACK_LIBS) $(MPI_LIBS)

atom.o: atom.F
    $(FC) -c $(FFLAGS_DEBUG) $(INCFLAGS) $(FPPFLAGS) $(FPPFLAGS_fixed_F) $<



.c.o:
    $(CC) -c $(CFLAGS) $(INCFLAGS) $(CPPFLAGS) $<
.F.o:
    $(FC) -c $(FFLAGS) $(INCFLAGS) $(FPPFLAGS) $(FPPFLAGS_fixed_F)  $<
.F90.o:
    $(FC) -c $(FFLAGS) $(INCFLAGS) $(FPPFLAGS) $(FPPFLAGS_free_F90) $<
.f.o:
    $(FC) -c $(FFLAGS) $(INCFLAGS) $(FCFLAGS_fixed_f)  $<
.f90.o:
    $(FC) -c $(FFLAGS) $(INCFLAGS) $(FCFLAGS_free_f90)  $<
**

The code compiles without errors.
If I run with Diag.ParallelOverK  True, I can run on multiple cores, no 
errors.
With Diag.ParallelOverK  False, I can run `mpirun -np 1` without errors, 
but if I try to use >=2 cores, it fails with:

**
Program received signal SIGSEGV: Segmentation fault - invalid memory 
reference.


Backtrace for this error:
#0  0x2ba6eb754d1d in ???
#1  0x2ba6eb753f7d in ???
#2  0x2ba6ec95405f in ???
#3  0x2ba70ec1cd8c in ???
#4  0x2ba6eab438a4 in ???
#5  0x2ba6eab44336 in ???
#6  0x563b3f1cfead in __m_diag_MOD_diag_c
    at /home/fidanyan/soft/siesta-4.1/Src/diag.F90:709
#7  0x563b3f1d2ef9 in cdiag_
    at /home/fidanyan/soft/siesta-4.1/Src/diag.F90:2253
#8  0x563b3ebc7c8d in diagk_
    at /home/fidanyan/soft/siesta-4.1/Src/diagk.F:195
#9  0x563b3eb9d714 in __m_diagon_MOD_diagon
    at /home/fidanyan/soft/siesta-4.1/Src/diagon.F:265
#10  0x563b3ed897cb in __m_compute_dm_MOD_compute_dm
    at /home/fidanyan/soft/siesta-4.1/Src/compute_dm.F:172
#11  0x563b3edbfaa5 in __m_siesta_forces_MOD_siesta_forces
    at /home/fidanyan/soft/siesta-4.1/Src/siesta_forces.F:315
#12  0x563b3f9a4005 in siesta
    at /home/fidanyan/soft/siesta-4.1/Src/siesta.F:73
#13  0x563b3f9a408a in main
    at /home/fidanyan/soft/siesta-4.1/Src/siesta.F:10
--
mpirun noticed that process rank 0 with PID 0 on node fenugreek exited 
on signal 11 (Segmentation fault).

**

I ran it with
`mpirun -np 2 ~/soft/siesta-4.1/Obj-debug-O0/siesta control.fdf | tee
siesta.out`


The header of the broken calculation:
 


Siesta Version  : v4.1.5-1-g384057250
Architecture    : gfortran_openMPI
Compiler version: GNU Fortran (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
Compiler flags  : mpifort -O0 -g -fbacktrace -fcheck=all
PP flags    :  -DFC_HAVE_FLUSH -DFC_HAVE_ABORT -DMPI -DMPI_TIMING 
-D_DIAG_WORK
Libraries   :  -llapack -lblas -lscalapack-openmpi -lblacs-openmpi 
-lblacsF77init-openmpi -lblacsCinit-openmpi -lpthread -lm

PARALLEL version

* Running on 2 nodes in parallel
 



I also attach the fdf file and the full output with an error.
Do you have an idea what is wrong?

Sincerely,
Karen Fidanyan
PhD student
Max Planck Institute for the Structure and Dynamics of Matter
Hamburg, Germany

Siesta Version  : v4.1.5-1-g384057250
Architecture: gfortran_openMPI
Compiler version: GNU Fortran (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
Compiler flags  : mpifort -O0 -g -fbacktrace -fcheck=all
PP flags:  -DFC_HAVE_FLUSH -DFC_HAVE_ABORT -DMPI -DMPI_TIMING 
-D_DIAG_WORK
Libraries   :  -llapack -lblas -lscalapack-openmpi -lblacs-openmpi 
-lblacsF77init-openmpi -lblacsCinit-openmpi -lpthread -lm
PARALLEL version

* Running on 2 nodes in parallel
>> Start of run:  28-JUN-2021  18:42:45

   ***   
   *  WELCOME TO SIESTA  *   
   ***   

reinit: Readi