[Wien] value of OMP_NUM_THREADS

2014-02-13 Thread shamik chakrabarti
Dear wien2k users, we have successfully installed wien2k 13. We are using a system with 16 CPUs. I have set OMP_NUM_THREADS=16 by editing .bash_profile to contain export OMP_NUM_THREADS=16. However, it is still using only 1 CPU out of 16. So what could be the proper value of
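A minimal sketch of how this setting is typically made and checked (assuming bash; whether WIEN2k actually honors OMP_NUM_THREADS depends on the threaded BLAS/LAPACK library it was linked against):

    # in ~/.bash_profile (read by login shells)
    export OMP_NUM_THREADS=16

    # re-read the file and confirm the value is visible
    source ~/.bash_profile
    echo $OMP_NUM_THREADS   # should print 16

Note that OMP_NUM_THREADS only affects threading within a single process; WIEN2k's k-point and MPI parallelism are configured separately through the .machines file.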

[Wien] wien2wannier release 1.0-beta

2014-02-13 Thread Elias Assmann
Dear Wien2k users, A new version of wien2wannier (1.0-beta), the interface from Wien2k to Wannier90, is available at http://www.ifp.tuwien.ac.at/forschung/arbeitsgruppen/cms/software-download/wien2wannier/ The new version is tagged as a “beta” release until it has been more thoroughly

[Wien] Configuring SCRATCH variable for parallel computation

2014-02-13 Thread César de la Fuente
Hi, I'm doing some tests on the Memento cluster of the University of Zaragoza on the TiC system, with a k-mesh of 100 k-points, on 4 nodes with 64 CPUs per node. It is a system that does not share RAM or hard disks between nodes during calculations. Initially the parallel computation with Wien2k stopped in the
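For context, WIEN2k takes its scratch directory from the SCRATCH environment variable; a sketch of the two usual choices (both paths are placeholders, not taken from the message):

    # shared scratch, visible from every node
    export SCRATCH=/home/$USER/scratch

    # node-local scratch, faster I/O but not visible to other nodes
    export SCRATCH=/tmp/$USER

With parallel runs spread across nodes, a node-local SCRATCH means each node sees only its own intermediate files, which can cause later steps that collect those files to fail.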

Re: [Wien] Configuring SCRATCH variable for parallel computation

2014-02-13 Thread Michael Sluydts
Hello César, To perform parallel calculations you do need a directory shared between all nodes. As you have described it, '/home' appears to be a form of shared storage. Its intended purpose is of course not known to us, but if it is shared there is no direct reason it cannot function for
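A quick way to check that a directory really is shared between nodes (the node names here are hypothetical):

    # on node1: drop a marker file in the candidate directory
    touch /home/$USER/shared_test
    # from node1, check whether another node sees it
    ssh node2 ls -l /home/$USER/shared_test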

Re: [Wien] Configuring SCRATCH variable for parallel computation

2014-02-13 Thread Oleg Rubel
It gets complicated when you do both MPI + k-point parallelization. In large calculations there are usually fewer k-points. Would it be possible to test MPI with the local scratch without k-point parallelization (i.e., k-points run sequentially)? This will help to mitigate the problems mentioned by
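A sketch of a .machines file for such a test, a single pure-MPI job with no k-point parallelism (the node name and core count are placeholders; check the WIEN2k user's guide for the exact syntax your version supports):

    # .machines: one 64-process MPI job; k-points are run sequentially
    1:node1:64
    lapw0:node1:64
    granularity:1

Because there is only one '1:' line, all k-points pass through the same MPI job one after another, which separates MPI/scratch problems from k-point distribution problems.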