Referring to [1], I don't know whether your particular Intel 2018.5.274 compiler is causing that error, but you may want to try a different Intel compiler version or a different compiler (e.g., gfortran, or oneAPI [2,3]) if you're able to do so.

[1] https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg17023.html
[2] https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg20884.html
[3] https://software.intel.com/content/www/us/en/develop/documentation/get-started-with-intel-oneapi-hpc-linux/top.html
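
If you do switch compilers, the compiler choice is made when WIEN2k is (re)configured and recompiled with siteconfig. A minimal sketch, assuming a standard installation in $WIENROOT and that the alternative compiler (gfortran or the oneAPI ifort) is already in your PATH; the exact siteconfig menu entries differ between WIEN2k versions:

  # reconfigure and recompile WIEN2k with a different compiler (illustrative)
  cd $WIENROOT
  ./siteconfig_lapw   # choose the compiling options / compiler, then recompile all programs
  # afterwards, repeat the crashing step in a fresh case directory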

On 3/26/2021 12:40 AM, Anupriya Nyayban wrote:
Dear Prof. Blaha,

Previously, I had followed these steps:
deleted the case.struct file
copied the struct file for +10 as case.struct
x dstart
run_lapw -I -fc 10.0 -p
And I got the message "forrtl: severe (67): input statement requires too much data, unit 10, file/case/./case.vector_1" in the first cycle.

Now, I have created a new case directory and saved the +10% struct file as case.struct. Initialization has been done with RKmax = 7.0 and a k-mesh of 150 points. The same message appears at the very beginning when "run_lapw -p -fc 10.0" is executed.
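
For reference, the initialization amounted to roughly the following commands (a sketch only; the interactive init_lapw session asks for the same RKmax and k-mesh values):

  init_lapw -b -rkmax 7.0 -numk 150   # batch initialization with RKmax = 7.0 and 150 k-points
  run_lapw -p -fc 10.0                # this is the run that stops with the forrtl error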

The struct file for +10% is attached below.



On Thu, 25 Mar 2021 at 12:34, Anupriya Nyayban <mamani...@gmail.com> wrote:

    Dear Prof. Blaha,


    Thank you very much for the help!!

    First, I had activated both min and run_lapw in optimize.job to
    get the energy of the relaxed structure, and I realize now that
    this was a serious mistake.

    Second, yes, the calculation crashes in the first cycle for +10%.

    Third, I have run "x dstart" and "run_lapw -I -fc 10.0 -p" for +10%
    and found the following message in the first cycle:
    "forrtl: severe (67): input statement requires too much data,
    unit 10, file/case/./case.vector_1".

    Could I run the volume optimization with a smaller RKmax value to
    avoid the "too much data" error, and later run the scf with the
    optimized lattice parameters and the converged RKmax and k-mesh?
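
    If that is acceptable, I would lower RKmax roughly as follows (a
    sketch only; the value 6.50 is illustrative, and for a complex
    calculation the file would be case.in1c instead of case.in1):

        cp case.in1 case.in1_rkmax7        # keep a copy of the RKmax = 7.0 input
        sed -i '2s/7\.00/6.50/' case.in1   # RKmax is the first entry on line 2 of case.in1

    Alternatively, the case could simply be re-initialized with a
    smaller -rkmax.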




    On Wed, 24 Mar 2021 at 17:42, Anupriya Nyayban <mamani...@gmail.com> wrote:

        Dear experts and users,

        In addition to the above information, I want to mention that the
        commands used in the optimize.job script are "min -I -j "run_lapw
        -I -fc 1.0 -i 40 -p"" and "run_lapw -p -ec 0.0001". RKmax and the
        k-mesh are set to 7.0 and 150, respectively. The energy versus
        volume curve (fitted to the Murnaghan equation of state) looks
        very different from the usual shape. I have no idea why lapw2
        crashes ("error in parallel lapw2" is reported in lapw2.error)
        for the +10% change in volume. I need your valuable suggestions
        to proceed with the calculation.
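
        For clarity, the relevant part of my optimize.job looks roughly
        like the sketch below (the loop structure and struct-file names
        are approximate, as generated by x optimize; only the two
        command lines are exactly what I use):

            foreach i ( case_vol__-10.0 case_vol___-5.0 case_vol____0.0 case_vol____5.0 case_vol___10.0 )
                cp ${i}.struct case.struct
                # commands activated in the job:
                min -I -j "run_lapw -I -fc 1.0 -i 40 -p"
                run_lapw -p -ec 0.0001
                save_lapw ${i}
            end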




        On Fri, 19 Mar 2021 at 00:39, Anupriya Nyayban <mamani...@gmail.com> wrote:

            Dear experts and users,

            I was running a volume optimization in parallel (with 8
            cores) for an orthorhombic 2*2*1 supercell containing 80
            atoms, on an HPC system (processor: dual-socket Intel
            Skylake with 18 cores per socket; RAM: 96 GB ECC DDR4
            2133 MHz in a balanced configuration; operating system:
            CentOS 7.3; compiler module: compiler/intel 2018.5.274).
            The changes in volume were set to -10, -5, 0, 5, and 10%.
            The only error I could find is in lapw2.error, which states
            "error in parallel lapw2". The scf calculations have
            completed for the volume changes of -10, -5, 0, and 5%.
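
            For the parallel runs, the .machines file is laid out along
            these lines (a sketch for 8-way k-point parallelism on a
            single node; the host name is illustrative):

                granularity:1
                1:localhost
                1:localhost
                # ... one "1:localhost" line per core, eight in total
                extrafine:1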

            Looking forward to your suggestions.
            If you need any additional information, please let me know.

            Thank you in advance.

--
With regards
Anupriya Nyayban
Ph.D. Scholar
Department of Physics
NIT Silchar
_______________________________________________
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
