If the jobs all run on the same machine (localhost), then they should all
be set up with the same speed:

lapw0:localhost:4
localhost:4
localhost:4
granularity:1
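
A quick way to verify the change is to rerun the two steps from the log
below by hand (just a sketch; the .machine1/.machine2 file names are my
assumption of what the parallel scripts generate here):

x lapw0 -p              # should again report ".machine0 : 4 processors"
x lapw1 -up -c -p       # should now start the parallel LAPW1 jobs instead
                        # of stopping with "@: Expression Syntax."
cat .machine1 .machine2 # assumed names of the generated host files;
                        # each should simply list localhost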


On Tue, Oct 22, 2013 at 2:21 AM, <t...@theochem.tuwien.ac.at> wrote:

> Hi,
>
> I don't know what the problem is, but I can say that
> in .machines there is no line specific to the HF module.
> If lapw1 and lapw2 are run in parallel, then the same applies to hf.
>
> F. Tran
>
>
> On Tue, 22 Oct 2013, Martin Gmitra wrote:
>
>> Dear Wien2k users,
>>
>> We are running the recent version of Wien2k, v13.1, with k-point
>> parallelization. To perform screened HF calculations, we believe that
>> MPI parallelization would speed them up.
>> For testing purposes, the calculations are to be run on a local
>> multicore machine.
>>
>> Our .machines file looks like:
>> lapw0:localhost:4
>> 1:localhost:4
>> 2:localhost:4
>> hf:localhost:4
>> granularity:1
>>
>> Invoking x lapw0 -p
>> starting parallel lapw0 at Tue Oct 22 09:15:48 CEST 2013
>> -------- .machine0 : 4 processors
>> LAPW0 END
>> LAPW0 END
>> LAPW0 END
>> LAPW0 END
>> 58.2u 0.6s 0:16.92 348.4% 0+0k 0+37528io 21pf+0w
>>
>> runs lapw0 in parallel, while
>> x lapw1 -up -c -p
>> starting parallel lapw1 at Tue Oct 22 09:18:30 CEST 2013
>> ->  starting parallel LAPW1 jobs at Tue Oct 22 09:18:30 CEST 2013
>> running LAPW1 in parallel mode (using .machines)
>> Granularity set to 1
>> Extrafine unset
>> @: Expression Syntax.
>> 0.0u 0.0s 0:00.10 10.0% 0+0k 0+64io 0pf+0w
>> error: command   /temp_local/CODES/WIEN2k_v13_mpi/lapw1cpara -up -c
>> uplapw1.def   failed
>>
>> The parallel_options file looks like:
>> setenv TASKSET "no"
>> setenv USE_REMOTE 0
>> setenv MPI_REMOTE 0
>> setenv WIEN_GRANULARITY 1
>>
>> Before starting the tests we load all libraries from the Intel compiler,
>> set WIENROOT, and run:
>>    export TASKSET="no"
>>    export USE_REMOTE=0
>>    export MPI_REMOTE=0
>>    export WIEN_GRANULARITY=1
>>    export WIEN_MPIRUN="mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_"
>>
>> Do you have any idea why lapw1 does not start?
>> Many thanks in advance,
>>
>> Martin Gmitra
>> Uni Regensburg
_______________________________________________
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
