Re: [Wien] BoltzTrap2 with hf potential

2019-11-04 Thread Gavin Abo

The hf has not been implemented yet in BoltzTraP2.

You would have to program your own python interface for it; refer to [1] 
for how a new DFT interface can be added.


Unless it has changed recently, BoltzTraP2 only accepts WIEN2k calculation 
files from serial, non-spin-polarized or spin-orbit calculations.  It might 
also be possible to trick BoltzTraP2 into handling hf by forcing it through 
one of those two supported calculation types.
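Purely as an illustration of that kind of trick, and not a tested or 
supported workflow, the sketch below presents the hf eigenvalues to 
BoltzTraP2 as if they came from an ordinary serial, non-spin-polarized run.  
The hf energy file name is an assumption on my part; check your case 
directory for whatever name your WIEN2k version actually writes, and keep 
in mind that the Fermi energy read from the scf file may also need attention:

cd /path/to/case                      # WIEN2k case directory of the finished hf run
cp case.energy case.energy.semilocal  # keep the original (semilocal) eigenvalues
# ASSUMPTION: the hf eigenvalues sit in a case.energyhf-style file; verify the name first
cp case.energyhf case.energy
# now run the usual serial BoltzTraP2 workflow on this directory (e.g. with
# the btp2 command-line tool), then restore case.energy.semilocal when done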


Some additional details were given in the BoltzTraP2 google group at [2].

However, to view [2] you need to be a member; you can join the group by 
following the Users group section given at [3].


[1] https://gitlab.com/sousaw/BoltzTraP2/-/wiki_pages/siesta
[2] https://groups.google.com/d/msg/boltztrap/yL_B1rPr5Ec/ohFc_5s-EAAJ
[3] 
https://www.imc.tuwien.ac.at/forschungsbereich_theoretische_chemie/forschungsgruppen/prof_dr_gkh_madsen_theoretical_materials_chemistry/boltztrap2/


On 11/4/2019 10:22 AM, Peeyush kumar kamlesh wrote:

Sir,
I am using the latest version of WIEN2k.  I want to know how we can run 
boltztrap2 using the hf potential.


Regards
Peeyush Kumar Kamlesh

___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] wien2k installation: XSEDE

2019-11-04 Thread Gavin Abo
Of note, if the XSEDE system in your subject line uses slurm, as in the 
documentation here:


https://portal.xsede.org/documentation-overview#compenv-jobs

then you likely need the SLURM_JOB_NODELIST variable described in the slurm 
documentation.  For example, see the slurm documentation (currently Version 
19.05) here:


https://slurm.schedmd.com/srun.html#lbAH
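As a rough, untested illustration only (the node count, tasks-per-node 
value, and the one-slot-per-task k-point split below are assumptions you 
would adapt to your own case and to your site's documentation), a slurm job 
script could build the .machines file from SLURM_JOB_NODELIST roughly like 
this:

#!/bin/bash
#SBATCH -N 2                       # assumption: two nodes
#SBATCH --ntasks-per-node 28
#SBATCH -t 2:00:00
module load mpi
module load intel
export SCRATCH="./"

rm -f .machines
# expand the compressed slurm node list (e.g. nid0[0001-0002]) into one hostname per line
for host in $(scontrol show hostnames "$SLURM_JOB_NODELIST"); do
  # one k-point parallel .machines line per task on that node
  for i in $(seq 1 "$SLURM_NTASKS_PER_NODE"); do
    echo "1:$host:1" >> .machines
  done
done
echo 'granularity:1' >> .machines
echo 'extrafine:1' >> .machines
runsp_lapw -p -ec 0.01 -cc 0.0001 -i 40 -fc 1.0

This just mirrors the k-point parallel lines already used in Bushra's 
script; whether 28 single-core slots per node is the right split depends on 
how many k-points the case actually has.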


Re: [Wien] wien2k installation: XSEDE

2019-11-04 Thread Gavin Abo

Comments:

Edison does look retired [1].

Based on the usage of hostname in Bushra's job file (below), it looks 
like it is configured for a shared-memory supercomputer.


However, if the supercomputer is not a shared-memory (single node) system 
but a distributed-memory (multiple node) system [2], the use of hostname is 
potentially problematic.


That is because, on a distributed-memory system, the head node typically is 
not a compute node [3].


One bad thing that can happen is that calculations run on the head node can 
break the cluster login, for example [4]:


"Do NOT use the login nodes for work. If everyone does this, the login 
nodes will crash keeping 700+ HPC users from being able to login to the 
cluster."

It depends on local policy, but most clusters I have seen have a policy 
that the system administrators can permanently take away a user's access 
to the cluster if a calculation is executed on the head node, for 
example [5]:


"CHTC staff reserve the right to kill any long-running or problematic 
processes on the head nodes and/or disable user accounts that violate 
this policy, and users may not be notified of account deactivation."


Instead of hostname, the job file usually needs to obtain a node list from 
the queuing system's job scheduler.  That could be done with a script like 
gen.machines [6] or Machines2W [7], or via an environment variable whose 
name depends on the queuing system, for example the PBS_NODEFILE variable 
for PBS [8,9].
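For a PBS/Torque system, a comparable (untested) sketch reads PBS_NODEFILE, 
which lists one hostname per allocated core, and writes the same kind of 
.machines lines; again, how the slots should be split depends on the number 
of k-points and on whether mpi parallel lapw1/2 is wanted:

rm -f .machines
# PBS_NODEFILE contains one hostname per allocated core
while read host; do
  echo "1:$host:1" >> .machines    # one k-point parallel slot per core
done < "$PBS_NODEFILE"
echo 'granularity:1' >> .machines
echo 'extrafine:1' >> .machines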


[1] 
https://www.nersc.gov/news-publications/nersc-news/nersc-center-news/2019/edison-supercomputer-to-retire-after-five-years-of-service/
[2] 
https://www.researchgate.net/figure/Shared-vs-Distributed-memory_fig3_323108484

[3] https://zhanglab.ccmb.med.umich.edu/docs/node9.html
[4] https://hpc.oit.uci.edu/running-jobs
[5] http://chtc.cs.wisc.edu/HPCuseguide.shtml
[6] https://docs.nersc.gov/applications/wien2k/
[7] SRC_mpiutil: http://susi.theochem.tuwien.ac.at/reg_user/unsupported/
[8] Script for "pbs": 
http://susi.theochem.tuwien.ac.at/reg_user/faq/pbs.html
[9] 
http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm


On 11/4/2019 6:37 AM, Dr. K. C. Bhamu wrote:

Dear Bushra,

I hope you are using the same cluster you were using before (NERSC: 
cori/edison).
From your job file it seems that you want to submit the job on edison (28 
cores).
Please make sure that edison is still working; my available information 
says that edison has now retired.  Please confirm with the system admin.
I would suggest you submit the job on cori; a job file is available on the 
NERSC web page.


Anyway, please send the details as Prof. Peter has requested so that 
he can help you.



Regards
Bhamu

On Mon, Nov 4, 2019 at 1:14 PM Peter Blaha 
<pbl...@theochem.tuwien.ac.at> 
wrote:


What means:  " does not work" ??

We need details.

On 11/3/19 10:48 PM, BUSHRA SABIR wrote:
> Hi experts,
> I am working on super computer with WIEN2K/19.1 and using the
following
> job file, but this job file is not working for parallel run of
LAPW1.
> Need help to improve this job file.
> #!/bin/bash
> #SBATCH -N 1
> #SBATCH -p RM
> #SBATCH --ntasks-per-node 28
> #SBATCH -t 2:0:00
> # echo commands to stdout
> # set -x
> module load mpi
> module load intel
> export SCRATCH="./"
>
> #rm .machines
> #write .machines file
> echo '#' .machines
> # example for an MPI parallel lapw0
> #echo 'lapw0:'`hostname`'  :'$nproc >> .machines
> # k-point and mpi parallel lapw1/2
>
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
> echo '1:'`hostname`':1' >> .machines
>
> echo 'granularity:1' >>.machines
> echo 'extrafine:1' >>.machines
> export SCRATCH=./
> runsp_lapw -p -ec 0.01 -cc 0.0001 -i 40 -fc 1.0
>
>
>   Bushra
>
>
>
  

Re: [Wien] Error bar and systematic and statistical error

2019-11-04 Thread Laurence Marks
A guess. Look at older literature where people have compared LDA, PBE and
other functionals for the band structure of well known materials (e.g.
silicon). Compare the difference to experiment for these. Then do the same
(e.g. LDA, PBE) for your system to get an idea. This is an experimental
approach (i.e., similar to what any good experimentalist does when
calibrating an instrument).

On Mon, Nov 4, 2019 at 9:53 AM mitra narimani 
wrote:

> Hello wien users
> I have a question about the error bar in the band structure of monolayers.
> How can we calculate the error bar in a band structure?  How can we calculate
> the possible systematic/statistical error for DFT simulations?  My
> calculations are based on DFT with wien2k using only GGA, and I don't have
> any experimental results to compare with.  I referred to the article titled
> "Error estimates for density functional theory prediction of surface energy
> and work function", but I did not find anything about how to calculate the
> error bar of a band structure.  Could you please help me?


-- 
Professor Laurence Marks
Department of Materials Science and Engineering
Northwestern University
www.numis.northwestern.edu
Corrosion in 4D: www.numis.northwestern.edu/MURI
Co-Editor, Acta Cryst A
"Research is to see what everybody else has seen, and to think what nobody
else has thought"
Albert Szent-Gyorgi


[Wien] BoltzTrap2 with hf potential

2019-11-04 Thread Peeyush kumar kamlesh
Sir,
I am using the latest version of WIEN2k.  I want to know how we can run
boltztrap2 using the hf potential.

Regards
Peeyush Kumar Kamlesh


[Wien] Error bar and systematic and statistical error

2019-11-04 Thread mitra narimani
Hello wien users
I have a question about the error bar in the band structure of monolayers.
How can we calculate the error bar in a band structure?  How can we calculate
the possible systematic/statistical error for DFT simulations?  My
calculations are based on DFT with wien2k using only GGA, and I don't have
any experimental results to compare with.  I referred to the article titled
"Error estimates for density functional theory prediction of surface energy
and work function", but I did not find anything about how to calculate the
error bar of a band structure.  Could you please help me?


Re: [Wien] wien2k installation: XSEDE

2019-11-04 Thread Dr. K. C. Bhamu
Dear Bushra,

I hope you are using the same cluster you were using before (NERSC:
cori/edison).
From your job file it seems that you want to submit the job on edison (28
cores).
Please make sure that edison is still working; my available information
says that edison has now retired.  Please confirm with the system admin.
I would suggest you submit the job on cori; a job file is available on the
NERSC web page.

Anyway, please send the details as Prof. Peter has requested so that he can
help you.


Regards
Bhamu





On Mon, Nov 4, 2019 at 1:14 PM Peter Blaha 
wrote:

> What means:  " does not work" ??
>
> We need details.
>
> On 11/3/19 10:48 PM, BUSHRA SABIR wrote:
> > Hi experts,
> > I am working on super computer with WIEN2K/19.1 and using the following
> > job file, but this job file is not working for parallel run of LAPW1.
> > Need help to improve this job file.
> > #!/bin/bash
> > #SBATCH -N 1
> > #SBATCH -p RM
> > #SBATCH --ntasks-per-node 28
> > #SBATCH -t 2:0:00
> > # echo commands to stdout
> > # set -x
> > module load mpi
> > module load intel
> > export SCRATCH="./"
> >
> > #rm .machines
> > #write .machines file
> > echo '#' .machines
> > # example for an MPI parallel lapw0
> > #echo 'lapw0:'`hostname`'  :'$nproc >> .machines
> > # k-point and mpi parallel lapw1/2
> >
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> > echo '1:'`hostname`':1' >> .machines
> >
> > echo 'granularity:1' >>.machines
> > echo 'extrafine:1' >>.machines
> > export SCRATCH=./
> > runsp_lapw -p -ec 0.01 -cc 0.0001 -i 40 -fc 1.0
> >
> >
> >   Bushra
> >
> >
> > 
> >
> >
>
> --
>
>P.Blaha
> --
> Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
> Phone: +43-1-58801-165300 FAX: +43-1-58801-165982
> Email: bl...@theochem.tuwien.ac.at    WIEN2k: http://www.wien2k.at
> WWW:   http://www.imc.tuwien.ac.at/TC_Blaha
> --