my two cents on this:
Physics in general is about modeling some situation with as few and as
simple assumptions, approximations and parameters as possible.
If the model works, that is, if it behaves similarly to the real
thing, fine. As you observed, the confidence in the model increases,
and people tend to conclude that the assumptions were correct and the
approximations are accurate. Neither is necessarily true, since
the nice result might be pure coincidence, but as long as nothing points
in that direction this is usually considered good enough until something
actually goes wrong.
If the model does not work, you go looking for which assumptions
or approximations are responsible for the trouble.
The advantage of DFT calculations is that they start from generally
accepted assumptions of quantum theory ('first principles') and
introduce relatively few assumptions and more or less controlled
approximations. This hopefully allows you to pin down what goes wrong
in your model, and you might even be able to fix it; at least it
improves the chances of doing so. After that, one can go back and look
at how earlier calculations might have been affected, or why the
problem did not show up before.
The first principles are something like assigning operators to the
observables you are interested in, defining the states they act upon,
and writing down some Hamiltonian corresponding to the internal-energy
observable. DFT is then about finding the ground state of this
Hamiltonian in terms of a single-electron density. Personally, I am not
happy with the term 'first principles', since working on the basis of
some valid first principles implies a lack of freedom to do something
wrong. However, one should be aware that things can go sideways already
at this stage: the selection of both the states taken into account and
the Hamiltonian obviously may influence the outcome.
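As a toy illustration of that setup (not actual DFT, and with made-up
numbers): in the simplest matrix picture, "finding the ground state of
the Hamiltonian" just means finding its lowest eigenstate.

```python
# Toy sketch: the ground state is the lowest eigenstate of the
# Hamiltonian matrix. Two-level example with hypothetical on-site
# energies (diagonal) and a coupling (off-diagonal), all in eV.
import numpy as np

H = np.array([[0.0, -0.5],
              [-0.5, 1.0]])  # hypothetical numbers, for illustration only

# eigh handles Hermitian matrices and returns eigenvalues in ascending
# order, so index 0 is the ground state.
eigvals, eigvecs = np.linalg.eigh(H)
e0, psi0 = eigvals[0], eigvecs[:, 0]

print(f"ground-state energy: {e0:.4f} eV")  # about -0.2071 eV
print(f"ground-state vector: {psi0}")
```

In real DFT the "matrix" is of course replaced by the Kohn-Sham
problem for the electron density, but the logic of selecting states and
a Hamiltonian, then minimizing the energy, is the same.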
The Hamiltonian by necessity involves certain approximations. For
example, the spin-orbit interaction is treated in Wien2k only
optionally, and then with additional approximations. Another prominent
problem is that one needs a single-electron density Hamiltonian to keep
the computations (barely) manageable. While a single-electron density
corresponding to the ground state of the true many-particle Hamiltonian
is guaranteed to exist, the proof of its existence is not constructive.
To find it in the space of single-electron density wave functions, one
approximates the (two-particle) exchange contributions by potentials
with acronyms like PBE, mBJ, ... I am no expert, but I understand that
improving these potentials is a major current research effort.
Even if the Hamiltonian is beyond doubt, the result of a calculation
can be ambiguous. As you noted, the ground state determines only the
properties at 0 K. If excitations with different values for the
observables lie within the range of the thermal energy, this has to be
taken into account - usually with additional approximations and
assumptions involved, depending on which properties one is interested
in (the phonon package, BoltzTraP for transport, Optic, ...). It might
even be difficult to determine that ground state in the first place:
especially if additional internal degrees of freedom like atomic
positions or spins are important, a plethora of states representing
local energy minima can appear, with very similar energies but very
different macroscopic properties.
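To put a number on that thermal-energy range, here is a short sketch
(with illustrative excitation energies, not data from any calculation)
of kT at 300 K and the relative Boltzmann weight of a state lying a
given energy above the ground state:

```python
# Sketch: how large is the "thermal energy" at 300 K, and how strongly
# is a low-lying excitation populated? Excitation energies below are
# arbitrary examples.
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def thermal_energy(T):
    """kT in eV at temperature T in K."""
    return K_B * T

def boltzmann_weight(delta_e, T):
    """Relative Boltzmann weight of a state delta_e (eV) above the ground state."""
    return math.exp(-delta_e / (K_B * T))

kT = thermal_energy(300.0)
print(f"kT at 300 K = {kT * 1000:.1f} meV")  # about 25.9 meV

for de in (0.010, 0.025, 0.100, 0.500):  # example excitation energies in eV
    print(f"dE = {de * 1000:5.0f} meV -> relative weight {boltzmann_weight(de, 300.0):.3g}")
```

The point of the sketch: excitations tens of meV above the ground
state are appreciably populated at room temperature, while states
hundreds of meV up are essentially frozen out, which is why low-lying
excitations are exactly the cases that need extra treatment.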
So, in my opinion, the foundation for believing that a DFT model
accurately represents some physical situation at 300 K would be that it
actually works in a lot of cases. When it does not work, one can
usually find fairly specific reasons for the failure (low-lying
excitations, structural phase transitions, ...) and improve things from
there.
Dr. Martin Pieper
Institute of Physics
On 28.01.2016 05:16, Seongjae Cho wrote:
As an engineering researcher with a great lack of understanding of the
ab initio calculations, I have basically believed that the
first-principles calculation results are "ideal" values presumably
obtained at "0 K", and that they need to be adjusted by proper
mathematical models, formulated as a function of temperature, to reach
more practical values at non-0 K temperatures.
However, in many pieces of literature, people try to compare the ab
initio calculation results with measurement results at non-0 K,
particularly at room temperature.
I'm wondering what sort of foundation is required for believing that
the simulation results can be treated as those obtained at 300 K. In
other words, what models or equations can be adopted for obtaining the
exact band structures and related parameters (Eg, effective mass, etc.)
when performing first-principles simulations?
It will be appreciated if you correct my misunderstanding and share
some wisdom. Many thanks.
- Sincerely, Seongjae.
Wien mailing list
SEARCH the MAILING-LIST at: