Try setting XML.Write to false if you use the Intel compiler.
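For reference, in the fdf input this is a single line (a minimal sketch; XML.Write is the standard fdf flag, everything else in the input stays as it is):

    XML.Write  .false.   # do not write the XML output file (workaround for Intel-compiled builds)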

Sent from my iPad

> On 30 Oct 2015, at 23:50, Simona Achilli <[email protected]> wrote:
> 
> It happens with both the 3.2 and the trunk version. The program runs both on a local 
> cluster and on a supercomputing cluster. The latter has only 2 GB of memory 
> per node and 16 CPUs per node, and I have verified that it works when using only 
> 2 CPUs per node, i.e., with about 8 GB of memory per process (the same holds on 
> the local cluster).
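> For reference, a minimal sketch of such a job request, assuming a SLURM batch 
> system and the usual way of invoking siesta (node counts and file names are 
> placeholders):
> 
>     #SBATCH --nodes=8
>     #SBATCH --ntasks-per-node=2    # place only 2 MPI ranks on each 16-core node
>     srun siesta < input.fdf > output.out
> 
> so that each node's memory is shared between only two processes.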
> The code was compiled by different teams on the two clusters, so I am 
> curious about a possible mistake in the compilation.
> Thanks for your support
> Simona
> 
> On 30 Oct 2015 at 16:11, "Nick Papior" <[email protected]> wrote:
>> Which SIESTA version?
>> Are you on a cluster? If so, have you asked your local administrator whether 
>> you are running out of memory?
>> Those outputs suggest relatively small systems, so perhaps you have compiled 
>> the program erroneously?
>> 
>> 2015-10-30 16:00 GMT+01:00 Simona Achilli <[email protected]>:
>>> Dear SIESTA users,
>>> I currently have some problems running SIESTA for metal surfaces, because the 
>>> program often crashes with a segmentation fault.
>>> It happens mainly with large systems (more than 400 atoms) but also with 
>>> cells containing a few atomic layers (40 atoms). In general, the number of 
>>> orbitals in these calculations is quite large.
>>> I thought it might be a problem related to the memory requirements. In fact, 
>>> I have verified that by allowing a higher memory per MPI process I can 
>>> SOMETIMES overcome the problem.
>>> 
>>> Nevertheless, since the segmentation fault always occurs at the same point, 
>>> i.e., after the initialization of the DM and the definition of the mesh, I was 
>>> wondering whether it could be due to a different problem, maybe related to the 
>>> mesh.
>>> In fact, it seems quite strange that a memory problem should occur with only 
>>> 40 atoms...
>>> I have attached below the last lines before the program stops, the first for a 
>>> large simulation cell, the second for a smaller one.
>>> 
>>> If you have encountered the same problem and solved it, please give me some 
>>> advice.
>>> I have also tried setting the DirectPhi parameter to .true., but nothing changed.
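>>> For reference, the DirectPhi flag and the mesh cutoff visible in the first log 
>>> below look like this in the fdf input, as a minimal sketch with values taken 
>>> from that run:
>>> 
>>>     MeshCutoff  250. Ry     # real-space grid fineness; the main memory knob for the mesh
>>>     DirectPhi   .true.      # recompute orbital values on the mesh instead of storing them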
>>> 
>>> 
>>> Thank you in advance
>>> 
>>> Simona Achilli
>>> 
>>> ***************************************************************************************
>>> InitMesh: MESH =   240 x   240 x   360 =    20736000
>>> InitMesh: (bp) =   120 x   120 x   180 =     2592000
>>> InitMesh: Mesh cutoff (required, used) =   250.000   267.995 Ry
>>> ExtMesh (bp) on 0 =   192 x   132 x   158 =     4004352
>>> New grid distribution:   2
>>>            1       1:  120    1:   60    1:   32
>>>            2       1:  120    1:   60   33:  180
>>>            3       1:  120   61:  120    1:   32
>>>            4       1:  120   61:  120   33:  180
>>> New grid distribution:   3
>>>            1       1:  120    1:   60    1:   35
>>>            2       1:  120    1:   60   36:  180
>>>            3       1:  120   61:  120    1:   35
>>>            4       1:  120   61:  120   36:  180
>>> Setting up quadratic distribution...
>>> ExtMesh (bp) on 0 =   192 x   132 x   100 =     2534400  
>>> ***************************************************************************************
>>> 
>>> New_DM. Step:     1
>>> Initializing Density Matrix...
>>> 
>>> InitMesh: MESH =    50 x    50 x  1200 =     3000000
>>> InitMesh: Mesh cutoff (required, used) =   300.000   312.872 Ry
>>> 
>>> * Maximum dynamic memory allocated =   115 MB
>> 
>> 
>> 
>> -- 
>> Kind regards, Nick
