Hi Federico,

The behaviour you're seeing is normal. Based on the type of calculation, ARTS 
tries to pick the best place to apply parallelisation. yCalc, for example, 
consists of several nested loops. Based on the number of iterations in each 
loop and the number of cores used, ARTS picks one loop to parallelise. If you 
do a ybatch calculation, the loop over the batch cases is always the one that 
gets parallelised. This is done because parallelisation introduces some 
overhead, which has less impact on outer loops than on inner loops. Since ARTS 
can't know how long each iteration will take, it assumes that all iterations 
inside one batch job are equally computationally expensive. If they're not, 
that can lead to the effects you're seeing. Currently, there is no mechanism 
to control the parallelisation behaviour from a control file.
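
To illustrate the effect: the run is only finished when the core that ended up 
with the most expensive batch cases is done, so the other cores can sit idle 
for a long tail. A toy Python sketch of that (not ARTS code; the case costs 
and core count are made up):

per_case_seconds = [60] * 15 + [6000] * 5   # hypothetical per-case runtimes
n_cores = 4

# Contiguous static split of the batch cases over the cores, analogous to a
# static schedule of the outer batch loop.
chunk = -(-len(per_case_seconds) // n_cores)  # ceiling division
shares = [per_case_seconds[i:i + chunk]
          for i in range(0, len(per_case_seconds), chunk)]

busy = [sum(s) for s in shares]
print("per-core busy time:", busy)
print("wall time:", max(busy), "s; fastest core idle for",
      max(busy) - min(busy), "s")

Three of the four cores in this toy example are done after five minutes, while 
the whole run takes over eight hours on the remaining one.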

If you have a job that is computationally expensive and runs for several days, 
it is advisable to split it up into smaller chunks. Not just for performance, 
but also to reduce the risk of losing several days' worth of calculations if 
something goes wrong, because the output is only written after the batch job 
has completed successfully.
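
If it helps, here is a hypothetical Python helper (not part of ARTS; file 
names and chunk sizes are placeholders). It assumes your main controlfile 
INCLUDEs a small chunk.arts file before ybatchCalc, so that ybatch_start and 
ybatch_n select the chunk, and that it writes its output to per-chunk 
filenames:

import subprocess

n_total = 1000      # total number of batch cases (placeholder)
chunk_size = 100    # cases per ARTS run (placeholder)

for start in range(0, n_total, chunk_size):
    n = min(chunk_size, n_total - start)
    # Write the chunk definition that the main controlfile INCLUDEs.
    with open("chunk.arts", "w") as f:
        f.write("Arts2 {\n")
        f.write(f"  IndexSet(ybatch_start, {start})\n")
        f.write(f"  IndexSet(ybatch_n, {n})\n")
        f.write("}\n")
    # "my_batch.arts" stands in for your own controlfile.
    subprocess.run(["arts", "my_batch.arts"], check=True)

That way a crash only costs you one chunk, and you can restart from the last 
completed one.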

Cheers,
Oliver


> On 17 May 2019, at 04:41, Federico Cutraro <fedecutr...@yahoo.com.ar> wrote:
> 
> Hello, I am using ARTS to obtain synthetic satellite images from WRF 
> forecasts. To increase the speed, I use several cores (20), but I realized 
> that ARTS does not use all the cores during the whole execution. The number 
> of cores employed decreases with time, and it ended up using only one core 
> for a long time. For example, in a run that took 3 weeks, the last week ran 
> on one core. Is there a way to always use all the cores, or is this normal 
> behaviour?
> 
> Kind regards,
> Federico

