[Wien] spin moment in case.scf and case.scfdmup
Dear Prof. Peter Blaha,

Thank you for your reply. I checked the partial charges in the spheres in case.scf2up/dn and found that the spin moment from the d states is about 1.9 muB, the same as in case.scfdmup. Since case.scf is not written during a non-scf calculation, can I conclude that the spin moment changed after spin-orbit coupling?

Best regards,

On Fri, Jan 6, 2012 at 12:33 AM, Peter Blaha <pblaha at theochem.tuwien.ac.at> wrote:
> Check case.scf2up/dn. From the differences in these files you can also get
> the spin moment (also decomposed into s, p, d, ... from :QTLxxx). It should
> agree with lapwdm.
>
> Am 05.01.2012 15:53, schrieb Bin Shao:
>> Dear all,
>>
>> I intend to calculate the orbital moment and the MAE using the force
>> theorem. First I did a self-consistent calculation, then non-scf
>> calculations with different magnetization directions, using the following
>> commands (-orb because GGA+U is used for this system):
>>
>> x lapwso -up -p -orb -c
>> x lapw2 -up -p -so -c
>> x lapw2 -dn -p -so -c
>> x lapwdm -up -p -so -c
>>
>> In this way I obtain the orbital moment in case.scfdmup, for example:
>>
>> :ORB001: ORBITAL MOMENT: -0.00187 0.00855 0.49744 PROJECTION ON M 0.49744
>> :SPI001: SPIN MOMENT: -0.2 0.8 1.90505 PROJECTION ON M 1.90505
>>
>> However, I found a difference (about 0.5 muB) between the spin moment in
>> case.scfdmup and the one in case.scf produced by the scf calculation:
>>
>> :MMI001: MAGNETIC MOMENT IN SPHERE 1 = 2.45641
>>
>> I searched the mailing list and found an answer to this difference given
>> by Prof. Novak:
>>
>>> In LAPWDM you calculate the spin moment from selected electrons only
>>> (usually d or f), while the moment in the sphere is the sum over all
>>> electrons; you can check this by running LAPWDM for all s, p, d and f
>>> states.
>>
>> But I think the spin moment from the s and p states should not be larger
>> than 0.5 muB. So where does the difference come from? Spin-orbit
>> coupling? Any suggestion will be appreciated; thank you in advance.
>>
>> Best,
>> --
>> Bin Shao, Ph.D. Candidate
>> College of Information Technical Science, Nankai University
>> 94 Weijin Rd., Nankai Dist., Tianjin 300071, China
>> Email: bshao at mail.nankai.edu.cn
>
> --
> Peter BLAHA, Inst. f. Materials Chemistry, TU Vienna, A-1060 Vienna
> Phone: +43-1-58801-165300  FAX: +43-1-58801-165982
> Email: blaha at theochem.tuwien.ac.at
> WWW: http://info.tuwien.ac.at/theochem/

--
Bin Shao, Ph.D. Candidate
College of Information Technical Science, Nankai University
94 Weijin Rd., Nankai Dist., Tianjin 300071, China
Email: bshao at mail.nankai.edu.cn

_______________________________________________
Wien mailing list
Wien at zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
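[Editor's note: the size of the discrepancy discussed in this thread can be checked with a one-line arithmetic sketch. The two numbers are the ones quoted above (:MMI001 sums all electrons in the sphere; the lapwdm :SPI001 value covers only the d shell), so their difference is the part attributable to s, p states plus any SOC-induced change:

```shell
# Difference between the total sphere moment (:MMI001 in case.scf) and the
# d-only lapwdm moment (:SPI001 in case.scfdmup), values taken from this thread.
awk 'BEGIN { printf "s,p (+ SOC) residual: %.5f muB\n", 2.45641 - 1.90505 }'
# prints: s,p (+ SOC) residual: 0.55136 muB
```

As Bin Shao notes, ~0.55 muB is more than one usually expects from s and p alone, which is why SOC is the suspect.]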
[Wien] PBS
Dear Laurence,

your last lines are exactly what we need! Thank you for this.

set remote = "/bin/csh $WIENROOT/pbsh"

where $WIENROOT/pbsh is just

mpirun -x LD_LIBRARY_PATH -x PATH -np 1 --host $1 /bin/csh -c $2

I will try it, but I am pretty sure that it will work fine.
Regards,
Florent

Le 05/01/2012 20:16, Laurence Marks a écrit :
> I gave a slightly jetlagged response -- for certain, the WIEN2k style works
> fine with all queuing systems. But it may not fit how the queuing system has
> been designed, and admins may not be accommodating.
>
> My understanding (second hand) is that torque is designed to work well with
> openmpi for accounting, and by default knows nothing about tasks created by
> ssh. When the user's time has elapsed, it will terminate the tasks it knows
> about (the main one plus anything started with mpirun) and ignore everything
> else. Hence, on clusters where killing an ssh on node A does not propagate a
> kill to its children on node B (which depends upon the ssh), one is left with
> processes that can run forever. There is something called an epilog script
> which maybe can do this, but it would need WIEN2k to create one every time it
> launches a set of tasks. Possible, but not trivial.
>
> Note: this is not just a WIEN2k problem. One of the admins at NU's large
> cluster is a friend, and he tells me that every now and then he goes around
> and tries to clean up tasks left running like this on nodes by all sorts of
> software. Sometimes he has to reboot nodes, since if torque believes there is
> nothing running on a node it will merrily create more tasks on it, which can
> lead to heavy oversubscription and hang the node. And, just to make life more
> fun, torque knows nothing about MKL threading, so on an 8-core node it can
> easily start 8 different non-mpi jobs -- and if they all want 8 threads...
>
> Probably too long a response. Below is the parallel_options file that I use
> on a system with moab (similar to, perhaps worse than, pbs), where I try to
> be a gentleman and set the MKL threading as well as use mpirun to launch
> tasks.
> setenv USE_REMOTE 1
> setenv MPI_REMOTE 0
> setenv WIEN_GRANULARITY 1
> setenv WIEN_MPIRUN "mpirun -x LD_LIBRARY_PATH -x PATH -np _NP_ -machinefile _HOSTS_ _EXEC_"
> set a=`grep -e 1: .machines | grep -v lapw0 | head -1 | cut -f 3 -d: | cut -c 1-2`
> setenv MKL_NUM_THREADS $a
> setenv OMP_NUM_THREADS $a
> setenv MKL_DYNAMIC FALSE
> if (-e local_options) source local_options
> set remote = "/bin/csh $WIENROOT/pbsh"
> set delay = 0.25
>
> $WIENROOT/pbsh is just
>
> mpirun -x LD_LIBRARY_PATH -x PATH -np 1 --host $1 /bin/csh -c $2
>
> With this at least I don't create problems (hopefully).
>
> On Thu, Jan 5, 2012 at 7:19 AM, Peter Blaha <pblaha at theochem.tuwien.ac.at> wrote:
>> It is NOT true that queuing systems cannot handle the WIEN2k style. We have
>> two big clusters and run all three types of jobs on them: i) ssh only
>> (k-parallel), ii) mpi-parallel only (no ssh), and iii) the mixed type. And
>> of course the administrators configured the sun grid engine so that it
>> makes sure no processes are left running when a job finishes, and
>> eventually kills all processes of a batch job on all assigned nodes after
>> it has finished. It's just a matter of whether the system programmers are
>> willing (or able??) to reconfigure the queuing system...
>>
>> PS: If you are running mpi-parallel, use "setenv MPI_REMOTE 0" in
>> $WIENROOT/parallel_options and ssh will not be used anyway.
>>
>> Am 05.01.2012 13:17, schrieb Laurence Marks:
>>> As Florent said, this is a known issue with some (not all) versions of
>>> ssh, and it is also a torque bug. What you have to do is use mpirun
>>> instead of ssh to launch jobs, which I think you can do by setting the
>>> MPI_REMOTE/USE_REMOTE switches. I think I posted how to do this some time
>>> ago, so please search the mailing list. (I am in China and can provide
>>> more information next week when I return, if this is not enough -- which
>>> it probably is not.)
>>>
>>> N.B., in case anyone wonders: with torque (PBS) you are not supposed to
>>> use ssh to communicate the way WIEN2k does. They are not going to move on
>>> this, so this is WIEN2k's fault.
>>> I've looked into this quite a bit and there is no solution except to
>>> avoid ssh (or live with zombie processes). Indeed, torque has the
>>> weakness of leaving processes around if a code does anything more
>>> adventurous than just running a single mpirun -- so it goes.
>>>
>>> On Thu, Jan 5, 2012 at 3:22 AM, Peter Blaha <pblaha at theochem.tuwien.ac.at> wrote:
>>>> I've never done this myself, but as far as I know one can define a
>>>> prolog script in all those queuing systems, and this prolog script
>>>> should ssh to all assigned nodes and kill all remaining jobs of this
>>>> user.
>>>>
>>>> Am 05.01.2012 10:17, schrieb Florent Boucher:
>>>>> Dear Yundi,
>>>>> this is a known limitation of ssh and rsh, which do not pass the
>>>>> interrupt signal to the remote host. Under LSF I had a solution in the
>>>>> past: a specific rshlsf for doing this. Currently I use either SGE or
>>>>> PBS on two different clusters, and the problem exists on both.
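[Editor's note: the thread-count extraction in Laurence's parallel_options can be sanity-checked in isolation. A minimal sketch, written in plain sh rather than the csh of parallel_options, assuming a .machines file with lines of the form 1:host:cores (the hostnames and counts below are made up):

```shell
# Toy .machines file in the format the parallel_options snippet expects:
# "1:<host>:<cores>" lines for lapw1/lapw2, plus a lapw0 line to be skipped.
printf '1:node17:8\n1:node18:8\nlapw0:node17:8\n' > .machines.test

# Same pipeline as in parallel_options, pointed at the toy file: take the
# first non-lapw0 line, cut out the third colon-separated field (core count).
a=$(grep -e 1: .machines.test | grep -v lapw0 | head -1 | cut -f 3 -d: | cut -c 1-2)
echo "MKL_NUM_THREADS would be set to: $a"   # prints: MKL_NUM_THREADS would be set to: 8
rm -f .machines.test
```

The `grep -v lapw0` matters because the lapw0 line also contains "1:" through its hostname and would otherwise be picked up first.]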
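[Editor's note: Peter's prolog suggestion and Laurence's epilog remark amount to the same sketch: walk the job's node list and kill the user's leftover WIEN2k processes on each node. A hypothetical dry-run version (the function name, the `lapw` process pattern, and the PBS-style nodefile format are all assumptions; a real torque prologue/epilogue would take the node list from the job environment and actually execute the commands):

```shell
# Print (not execute) one cleanup command per unique node in a PBS-style
# nodefile (one hostname per allocated core, so nodes repeat). Untested
# sketch: on a real cluster, pipe the output to sh instead of just printing.
print_cleanup_cmds() {
    sort -u "$1" | while read -r node; do
        echo "ssh $node pkill -u \$USER -f lapw"
    done
}

# Example with a toy nodefile:
printf 'n001\nn001\nn002\n' > nodefile.test
print_cleanup_cmds nodefile.test   # one ssh/pkill line each for n001 and n002
rm -f nodefile.test
```

This is exactly the kind of script Laurence says WIEN2k would need to generate per task set; the hard part is not the script but wiring it into the queuing system.]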
[Wien] I still have a problem with wien2k in parallel mode
Dear fellows,

thanks for the answers.

2012/1/2 Peter Blaha <pblaha at theochem.tuwien.ac.at>:
>> model name : Intel(R) Xeon(R) CPU X3430 @ 2.40GHz
>> stepping  : 5
>> cpu MHz   : 1197.000
>
> Are you running at half speed??? At least on my machines it would indicate
> the expected cpu MHz of 2400.

The machine was idle when I got this information. If I repeat it with lapw1 submitted, the speed goes up to double.

> In principle this message should give you some clues. The mpirun command you
> listed is incomplete and wrong. You said you have:
>
> setenv WIEN_MPIRUN "mpirun -v -np _NP_ -machinefile _HOSTS_ _EXEC_"
>
> I think the -v is wrong??

It is correct; it means verbose mode, as you can see here:

[nilton at bodesking case]$ mpirun -v -np 4 -machinefile .machines /home/nilton/wien2k/lapw1c_mpi lapw1.def
running /home/nilton/wien2k/lapw1c_mpi on 4 LINUX ch_p4 processors
Created /home/nilton/pesquisa/dftCalc/calWien/gaxtl1-xas/075/case/case/PI13830

I am using .machines because .machine1 is empty. I tried this command and it works very well, as you can see in the output of the top command.

lapw1c_mpi running on bodesking:

[nilton at bodesking ~]$ top
top - 17:41:40 up 2:31, 6 users, load average: 0.42, 0.99, 1.91
Tasks: 201 total, 2 running, 199 sleeping, 0 stopped, 0 zombie
Cpu(s): 8.7%us, 1.1%sy, 0.0%ni, 85.8%id, 0.0%wa, 0.1%hi, 4.4%si, 0.0%st
Mem: 12250640k total, 1639948k used, 10610692k free, 137816k buffers
Swap: 8193140k total, 0k used, 8193140k free, 879624k cached

  PID USER    PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
14711 nilton  16   0 47540  22m 3220 R 35.2  0.2  0:13.03 lapw1c_mpi

lapw1c_mpi running on compute-0-1:

[nilton at compute-0-1 ~]$ top
top - 17:42:38 up 4 days, 3:34, 2 users, load average: 0.41, 0.91, 2.08
Tasks: 115 total, 2 running, 113 sleeping, 0 stopped, 0 zombie
Cpu(s): 6.2%us, 0.7%sy, 0.0%ni, 88.2%id, 0.0%wa, 0.6%hi, 4.3%si, 0.0%st
Mem: 6058240k total, 3244748k used, 2813492k free, 207132k buffers
Swap: 1020116k total, 0k used, 1020116k free, 2881356k cached

  PID USER    PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
17044 nilton  16   0 65288  34m 3204 R 29.3  0.6  0:30.33 lapw1c_mpi

And by the way, the time program is installed, as you can see below:

[nilton at bodesking case]$ time
real  0m0.000s
user  0m0.000s
sys   0m0.000s

Nilton
--
Nilton S. Dantas
Universidade Estadual de Feira de Santana
Departamento de Ciências Exatas, Área de Informática
Av. Transnordestina, S/N, Bairro Novo Horizonte
CEP 44036-900 - Feira de Santana, Bahia, Brasil
Tel./Fax: +55 75 3161-8086
http://www2.ecomp.uefs.br/
http://www.uefs.br/portal
[Wien] LAPW Error!
This error might be caused by a case.struct file format problem similar to:

http://zeus.theochem.tuwien.ac.at/pipermail/wien/2011-February/014301.html

or by a user setup/input problem similar to:

http://zeus.theochem.tuwien.ac.at/pipermail/wien/2010-November/013994.html

On 1/6/2012 1:30 PM, António Vanderlei dos Santos wrote:
> Dear Users,
>
> I have a problem; perhaps someone can help me.
>
> forrtl: severe (24): end-of-file during read, unit 5, file /home/vandao/WIEN2k/filmefeo/filmefeo.in1
>
> Image   PC        Routine   Line     Source
> lapw1   0859F88D  Unknown   Unknown  Unknown
> lapw1   0859EE05  Unknown   Unknown  Unknown
> lapw1   0855C848  Unknown   Unknown  Unknown
> lapw1   085265EA  Unknown   Unknown  Unknown
> lapw1   08525F0B  Unknown   Unknown  Unknown
> lapw1   0854396A  Unknown   Unknown  Unknown
> lapw1   08064A84  inilpw_   370      inilpw.f
> lapw1   08066FDF  MAIN__    42       lapw1_tmp_.F
> lapw1   080482A1  Unknown   Unknown  Unknown
> lapw1   085AAAD0  Unknown   Unknown  Unknown
> lapw1   08048161  Unknown   Unknown  Unknown
>
> stop error
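[Editor's note: the end-of-file on unit 5 in the traceback above means lapw1 ran out of input while reading case.in1, i.e. the file is missing, empty, or truncated. A minimal pre-flight check, a sketch only: the function name is made up, the case name is taken from the error message, and `.in1c` is the complex-case variant of the input file:

```shell
# check_in1: succeed if <case>.in1 or <case>.in1c exists and is non-empty.
check_in1() {
    [ -s "$1.in1" ] || [ -s "$1.in1c" ]
}

CASE=filmefeo   # case name taken from the error message above
if check_in1 "$CASE"; then
    echo "$CASE.in1 input found - safe to run x lapw1"
else
    echo "ERROR: $CASE.in1 (and .in1c) missing or empty - re-run init_lapw or check case.struct"
fi
```

A check like this only catches the empty-file case; a truncated but non-empty in1 (e.g. from a struct-file format problem, as in the first link above) still needs the input regenerated.]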