Re: [SIESTA-L] << avoid SIESTA terminal output >>

2019-02-06, by I. Camps
Thanks, Nick and Sushil, for the suggestions.

[]'s,

Camps


On Tue, Feb 5, 2019 at 7:05 PM Nick Papior  wrote:

> These messages are written to stdout and stderr.
>
> If you want to suppress the stderr messages while running in the
> background, simply do:
>
> siesta ... 2>/dev/null &
>
> On Mon, Feb 4, 2019 at 22:04, I. Camps wrote:
>
>> Hi SIESTers,
>>
>> I would like to know how to avoid the output from SIESTA that is directed
>> to the terminal window.
>>
>> After sending a calculation to run in background, I am getting message
>> like these:
>>
>> Gamma-point calculation with multiply-connected orbital pairs
>> Folding of H and S implicitly performed
>>
>> I am using siesta_4.1-b4. In previous versions and with the same system I
>> did not get such messages.
>>
>> Regards,
>>
>> Camps
>>
>
>
> --
> Kind regards Nick
>
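As a minimal sketch of the redirection described above (assuming a bash-like shell; the input and output file names are placeholders, not taken from the thread):

# Keep stdout in a log file and discard the stderr notes:
siesta < input.fdf > output.out 2>/dev/null &

# Or keep stderr in its own file instead of discarding it:
siesta < input.fdf > output.out 2> siesta.err &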


[SIESTA-L] Segmentation fault in tbtrans run

2019-02-06, by Barnali Bhattacharya
Dear SIESTA users,

I am using the siesta-3.2 version (installed in parallel with the Intel MKL
and MPI libraries) to calculate the I-V characteristics of some nanowires.
The transiesta calculations finished successfully, but when I try to run
the tbtrans calculations I get the following error.
I have even tried running serially, but the same segmentation fault occurs
in the tbtrans calculation.
However, both transiesta and tbtrans run for the same input on a local
machine where siesta-3.2 is installed with gfortran.
The error I am getting is:
.
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source
tbtrans            0052C8A4          Unknown            Unknown     Unknown
tbtrans            004AC5A8          Unknown            Unknown     Unknown
tbtrans            004A9AC9          Unknown            Unknown     Unknown
tbtrans            0046CA39          Unknown            Unknown     Unknown
tbtrans            0043763B          Unknown            Unknown     Unknown
tbtrans            004351BD          Unknown            Unknown     Unknown
tbtrans            004597D5          Unknown            Unknown     Unknown
tbtrans            0040496C          Unknown            Unknown     Unknown
libc.so.6          003FF541ECDD      Unknown            Unknown     Unknown
tbtrans            00404869          Unknown            Unknown     Unknown
.

Could anyone please help me in this regard?
Any suggestion would be appreciated.
Thanks in advance,

Barnali Bhattacharya
Senior Research Fellow
Department of Physics, Assam University
Silchar-788011, India




-- 
Barnali Bhattacharya
Ph. D student (CSIR SRF)
Department of Physics
Assam University, Silchar
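
A minimal sketch of two things that are commonly checked for this kind of
Intel-compiled Fortran segmentation fault; both are assumptions about the
setup rather than a confirmed diagnosis, and the input file name is a
placeholder:

# 1) Raise the stack size limit before running; forrtl error 174 is often
#    triggered by a too-small stack with Intel-compiled binaries:
ulimit -s unlimited
tbtrans < input.fdf > tbtrans.out

# 2) Rebuild tbtrans with traceback information, so routine names and line
#    numbers appear in place of "Unknown" (ifort flags, added to FFLAGS in
#    the arch.make used for the build):
#    FFLAGS += -g -traceback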


[SIESTA-L] Structure got distorted after relaxation

2019-02-06, by Bibhas Manna
Dear SIESTA users,

I am using the SIESTA (4.1-b3) code for the relaxation of a graphene-metal
oxide interface structure. I have seen that after a few 'vc-relaxation'
steps, some atoms of the structure end up outside the unit cell. Now, as I
am continuing with the further simulation steps, it seems to me that these
outside atoms may not get the proper periodic boundary conditions, which
would result in a wrongly converged *distorted structure*.

Now I am a bit confused, as I don't know whether the final relaxed structure
is correct or not.

I am very new to the SIESTA code. Could you please help me clear up my
doubt?

I am looking forward to hearing from you.

Thanking you.
With regards,
Bibhas


Re: [SIESTA-L] openmp

2019-02-06, by Nick Papior
In the Siesta 4.1 manual there is a clear description (Sec. 2.3.2) on how
to utilize hybrid MPI+OpenMP using OpenMPI.

For your architecture/cluster it may be different; in that case I would
advise you to contact your local HPC admin.
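
As a rough sketch of what such a hybrid launch can look like, assuming
OpenMPI and a bash-like shell (the process and thread counts and file names
are only illustrative; Sec. 2.3.2 of the manual is the authoritative
reference):

# 4 MPI processes, each running 8 OpenMP threads:
export OMP_NUM_THREADS=8
export OMP_PROC_BIND=true
mpirun -np 4 --bind-to none siesta < input.fdf > output.out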



On Tue, Feb 5, 2019 at 22:03, 郑仁慧 wrote:

> Dear Nick:
>  Thank you very much for your reply. I cannot understand your
> explanation well. Could you please give me an example using OpenMP or MPI?
> Thank you again for your kind help.
> Sincerely
> Ren-hui Zheng
>
>
>
>
>
>
> From: Nick Papior 
> Date: 2019-02-04 15:07:10
> To: siesta-l 
> Subject: Re: [SIESTA-L] openmp
>
> Dear Ren-hui Zheng,
>
> Siesta can benefit from MPI, OpenMP, or both.
> The benefit may be briefly summarized as follows:
> MPI distributes across orbitals, hence a basic piece of advice is that you
> get performance scaling up to # of cores == # of atoms (rule of thumb).
> OpenMP also distributes across orbitals, but on a finer level. Here you
> *may* get better scaling with more threads than atoms; however, generally I
> would still expect the best performance scaling up to # of cores == # of
> atoms (rule of thumb).
>
> For the hybrid (MPI + OpenMP) the same thing applies: # of MPI-procs TIMES
> # of threads per MPI-proc == # of atoms should give you reasonable scaling.
>
> Again, these are rules of thumb, as it also depends on your basis etc.
> Note that it may be difficult to get correct scaling with OpenMP, since
> affinity plays a big role. How you place threads depends on your
> architecture, and I would advise you to do some benchmarks on your
> architecture to figure out how best to use OpenMP.
>
> On Sun, Feb 3, 2019 at 22:01, 郑仁慧 wrote:
>
>> Dear all:
>>    Can SIESTA improve the calculation speed when optimizing a
>> molecular structure using OpenMP or MPI? When I optimize a molecular
>> structure with OpenMP or MPI, the calculation speed is not improved. I
>> do not know the reason. Could someone explain it? Thank you very
>> much for your help in advance.
>>
>> Sincerely
>> Ren-hui Zheng
>>
>>
>
> --
> Kind regards Nick
>
>
>

-- 
Kind regards Nick
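
As a rough worked example of the rule of thumb quoted above, for a
hypothetical 64-atom system (the counts and file names are only
illustrative, not benchmarked):

# Pure MPI: aim for roughly # of cores == # of atoms:
mpirun -np 64 siesta < input.fdf > output.out

# Hybrid MPI + OpenMP: # of MPI procs TIMES # of threads == # of atoms,
# e.g. 16 MPI processes with 4 OpenMP threads each (16 x 4 = 64):
export OMP_NUM_THREADS=4
mpirun -np 16 --bind-to none siesta < input.fdf > output.out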