Hi Christoph,

Here are my comments on your old email.

> I see what you mean: calculations are done with matrices that only have the
> translational symmetries left. So what’s the point in solvers knowing
> anything about non-translational symmetries?



> First of all, it makes sense to define a calculation as “calculate the band
> structure / Green’s function for this model” and not just “calculate the
> band energies for a given k”. To do the former efficiently, an
> understanding of non-translational symmetries is needed. That understanding
> must be on the level of the solver, and the low-level format should provide
> everything that a solver needs to know.
> One could then argue: Why not only _declare_ the symmetry, without
> actually storing the system in a way that takes advantage of it? I think
> that a storage format that utilizes symmetry offers increased robustness.
> There is less potential for error when some important properties of a
> system are known to be true by construction. Just like a quasi-1-d lead in
> current Kwant (=InfiniteSystem) is always periodic as declared.



I agree, we can't get around solving Hamiltonians of the size of the
translational unit cell. The root reason is that for translationally
invariant systems we have to use k as a quantum number, and apart from a
measure-zero part of the Brillouin zone, k is not invariant under a given
space group (SG) symmetry.

The gain comes from only having to solve for the fundamental domain of
k-space, often called the "irreducible wedge" since it usually is a
wedge-shaped region with its apex at the gamma point.
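
To make that saving concrete, here is a rough sketch of what I have in
mind (not existing Kwant API; solve_bands and the wedge sampling are
hypothetical): the expensive diagonalization runs only for k-points inside
the irreducible wedge, and the rest of the Brillouin zone is filled in
using E(g k) = E(k) for every point-group element g.

import numpy as np

def bands_on_full_bz(solve_bands, wedge_kpoints, point_group):
    """Fill the full Brillouin zone from bands computed on the irreducible wedge.

    solve_bands   : callable k -> array of band energies (the expensive step)
    wedge_kpoints : k-vectors sampling the irreducible wedge
    point_group   : matrices g acting on k-space (the point group of the SG)

    Uses E(g k) = E(k), so H(k) is diagonalized only once per wedge point.
    """
    full = {}
    for k in wedge_kpoints:
        k = np.asarray(k, dtype=float)
        energies = solve_bands(k)            # diagonalize H(k) once
        for g in point_group:
            gk = tuple(np.round(g @ k, 12))  # image of k under g, used as dict key
            full[gk] = energies
    return full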

> There’s one more aspect: when the Hamiltonian is invariant under a spatial
> symmetry, the bands have degeneracies at special high-symmetry values of k.
> While it’s true that these degeneracies occur only exactly for a subspace
> of the reciprocal space that has measure zero (that’s actually a
> re-phrasing of what you said above), it’s also true that the whole
> “topology” of the band structure gets modified by these degeneracies. We
> might be able, who knows, to somehow exploit their existence. Perhaps the
> Hamiltonian can be simplified in the vicinity of a special k point? Perhaps
> we can do some automatic analysis of the band structure? In any way, to do
> this it’s necessary that the full symmetry is present in a machine-readable
> way.


I also think it would be useful to automate symmetry analysis at
high-symmetry k-points. This could speed up the diagonalization if we
exploit the block-diagonal structure enforced by the symmetries. While
this may not be a significant gain when calculating the entire band
structure, these high-symmetry points come up quite often during analysis,
and it would make sense to implement this at the level of the solver
rather than doing post-processing (like diagonalizing degenerate bands
with respect to some symmetry) at a higher level.
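
To sketch the kind of gain I mean (plain numpy, hypothetical names): if a
unitary U representing a little-group symmetry commutes with H at the
high-symmetry k-point, every eigenspace of U is invariant under H, so H
can be diagonalized block by block. For a non-abelian little group this
single-U version only gives a partial blocking, but the idea is the same.

import numpy as np

def block_diagonalize(h, u, decimals=9):
    """Split H into blocks using a unitary symmetry U with [H, U] = 0.

    Each eigenspace of U is invariant under H, so H is block diagonal in the
    eigenbasis of U.  Returns (basis, block) pairs; diagonalizing the blocks
    separately gives all eigenvalues of H.
    """
    evals, evecs = np.linalg.eig(u)
    labels = np.round(evals, decimals)        # group numerically equal eigenvalues
    blocks = []
    for lam in np.unique(labels):
        cols = evecs[:, np.isclose(labels, lam)]
        basis, _ = np.linalg.qr(cols)         # orthonormal basis of the eigenspace
        blocks.append((basis, basis.conj().T @ h @ basis))
    return blocks

# Sanity check one could run: the spectra agree,
#   np.sort(np.linalg.eigvalsh(h)) ==
#   np.sort(np.concatenate([np.linalg.eigvalsh(b) for _, b in block_diagonalize(h, u)]))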

> Let me now briefly sketch how I imagine that symmetries would work in
> low-level systems.

> Let’s consider a general space group G with a translational subgroup T. T
> is always Abelian and typically consists of infinitely many elements that
> can be indexed by a vector of integers, but it could also be the trivial
> group with a single element 1 (when there’s no translational symmetry).



> We assume that T is a “normal subgroup” of G. That means that gT = Tg, i.e.
> left and right cosets of T are the same. T is normal for all space groups.
> (I cannot think of any useful application where T would be not normal – but
> then I cannot think of any application where G is not some space group.)



> The cosets of T behave as a group, i.e. g_1 T g_2 T = g_1 g_2 T. That group
> (whose elements are the cosets of T) is denoted G/T – a factor group of G.



> Each coset of T has infinitely many elements, but we can specify it by a
> single representative. (All the other members of the coset can be generated
> from the representative by applying pure translations to it.)



> To describe the group in a very useful way, it is enough to store exactly
> one representative for each coset of T. The representatives are chosen in a
> unique way, such that they only contain translations by less than one
> primitive cell (and only in positive direction). Such sub-unit-cell
> translations are necessary to describe space groups that contain “screw
> axes” and “glide planes”.



> Any element of the full group G can now be described uniquely by a single
> representative of a coset and a translation.


I fully agree with your analysis; in my opinion this is the most
transparent way of thinking about space groups.
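
To make the bookkeeping concrete, here is a minimal sketch of how I
picture storing and composing such elements (just an illustration, not a
proposed format): a pair (R, t), with R an integer matrix in the basis of
primitive lattice vectors and t a vector of Fractions, so everything stays
exact. split() factors an arbitrary element into a lattice translation and
the canonical representative with a sub-unit-cell translation.

from fractions import Fraction
import numpy as np

def compose(g1, g2):
    """Seitz-style composition (R1, t1)(R2, t2) = (R1 R2, R1 t2 + t1).
    R is an integer matrix in the basis of primitive lattice vectors and
    t a vector of Fractions, so all arithmetic stays exact."""
    (r1, t1), (r2, t2) = g1, g2
    return r1 @ r2, r1.dot(t2) + t1

def split(g):
    """Split g = (R, t) into a pure lattice translation (the integer part
    of t) and the canonical coset representative with 0 <= t_i < 1."""
    r, t = g
    lattice = np.array([Fraction(int(x // 1)) for x in t])
    return lattice, (r, t - lattice)

# Example: a two-fold screw axis along z (180-degree rotation plus half a
# lattice vector); applying it twice yields a pure lattice translation.
rot = np.array([[-1, 0, 0], [0, -1, 0], [0, 0, 1]])
tau = np.array([Fraction(0), Fraction(0), Fraction(1, 2)])
lattice, rep = split(compose((rot, tau), (rot, tau)))
# lattice == [0, 0, 1] and rep is the identity with zero translation.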


> This allows us to describe the full system by only specifying the
> Hamiltonian for a fundamental domain and the space group G. Each hopping is
> associated with an element of G. One subtle issue is that some bits of the
> Hamiltonian correspond to sites that lie on “Wyckoff positions”, that is
> special points that are kept invariant by some subgroup of G. That can be
> dealt with by specifying the subgroup under which that site is invariant.



> From the above information, one possible primitive cell (i.e. fundamental
> domain of translations) can be reconstructed by acting with each coset
> representative once on the fundamental domain.
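
Concretely, I imagine the reconstruction roughly like the following sketch
(lattice coordinates, Fraction positions, hypothetical names). Note the
duplicates it has to merge; that is exactly where the subtleties below
come in.

import numpy as np

def unit_cell_sites(fd_sites, coset_reps):
    """Reconstruct one primitive cell: act with every coset representative
    (R, t) on every fundamental-domain site and reduce the image into the
    unit cell (components in [0, 1)).  Sites on Wyckoff positions produce
    the same image several times; duplicates are merged."""
    cell = []
    for site in fd_sites:
        for r, t in coset_reps:
            image = tuple(x - (x // 1) for x in r.dot(site) + t)
            if image not in cell:
                cell.append(image)
    return cell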


I think the situation is actually even more complicated than this. As you
mention, sites can sit at high-symmetry Wyckoff positions, and these are
always on the boundary of the fundamental domain. (To clarify my
terminology: I imagine the translational unit cell (and the entire space)
being tiled by fundamental domains. We choose one of these and refer to it
as "the" fundamental domain, and to the others as its images under
symmetries.) Generic points in the interior of the fundamental domain
always have trivial symmetry; any symmetry operation maps them to a
different fundamental domain. By definition the fundamental domain is a
minimal volume that covers space under the successive action of all
symmetry operations. If there were a point inside that is invariant under
some SG symmetry, it would have points in its vicinity that are mapped
onto each other, contradicting the assumption that the volume is minimal.

For on-site terms, one needs to specify the on-site Hamiltonian for every
site in the fundamental domain.  For a site S it has to be invariant under
the point group of the site P_S, the subgroup of the space group G that
leaves the site invariant. This is equivalent to the subgroup of the full
point group P=G/T (group of coset representatives) that leaves S invariant
modulo lattice translations. Then one can act with the elements of the
quotient P/P_S to find all the unique images of S in the translational unit
cell and generate the corresponding on-site Hamiltonians.
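
A sketch of what I have in mind, using the same (R, t) pairs and lattice
coordinates as in the sketch above (the orbital unitaries D(g) acting on
the site's orbitals are assumed to be given; all names are hypothetical):

import numpy as np

def site_point_group(site, coset_reps):
    """P_S: the coset representatives (R, t) that map the site onto itself
    modulo a lattice translation.  site and t are vectors of Fractions in
    lattice coordinates, R an integer matrix."""
    return [(r, t) for r, t in coset_reps
            if all(x.denominator == 1 for x in r.dot(site) + t - site)]

def symmetrize_onsite(h, orbital_reps):
    """Project an on-site matrix onto its symmetry-allowed part by group
    averaging over the unitaries D(g) representing P_S on the site's
    orbitals; the result obeys D(g) H D(g)^dagger = H for every g in P_S."""
    return sum(d @ h @ d.conj().T for d in orbital_reps) / len(orbital_reps)

# The on-site matrices of the images of the site are then obtained by
# conjugating with the orbital unitaries of representatives of P / P_S.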

There are also complications for hoppings, as bonds can have nontrivial
symmetry too. A bond that lies fully inside a fundamental domain has
trivial symmetry, so every symmetry operation maps it onto a different
bond in a different fundamental domain; no complication there. But a bond
can be invariant under symmetries in two ways. Either the symmetry leaves
both of its ends invariant (e.g. a rotation about the bond axis), which
only happens if both sites it connects have nontrivial symmetry; or the
symmetry exchanges the two sites (e.g. inversion through the bond center),
which can only happen if the sites are symmetry-equivalent and lie in
different fundamental domains. It seems relatively easy to enumerate these
symmetries and generate the point group of the bond B, P_B. Now, similarly
to the on-site term, the symmetries in P_B restrict the allowed hopping
Hamiltonian on the bond, and one needs to use elements of P/P_B to
generate the equivalent bonds.
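
For illustration, here is a hedged sketch of the constraint coming from a
single element of P_B (my conventions: t is the hopping matrix from site j
to site i, and w_i, w_j are the unitaries by which the symmetry acts on
the orbitals of the two ends; the exact signs and ordering would of course
have to match the real implementation):

import numpy as np

def hopping_allowed(t, w_i, w_j, swaps_ends, atol=1e-9):
    """Check whether a hopping matrix t (from site j to site i) is
    compatible with one bond symmetry g in P_B.

    w_i, w_j   : unitaries representing g on the orbitals of the two ends.
    swaps_ends : True if g exchanges the two sites of the bond
                 (e.g. inversion through the bond center),
                 False if it leaves both ends fixed
                 (e.g. a rotation about the bond axis).
    """
    if swaps_ends:
        # The bond is mapped onto its reverse, which carries t^dagger.
        return np.allclose(w_i @ t @ w_j.conj().T, t.conj().T, atol=atol)
    # Both ends fixed: t itself must be invariant.
    return np.allclose(w_i @ t @ w_j.conj().T, t, atol=atol)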

Considering all these complications, I'm not sure that storing hoppings in
the way you propose is feasible. As I understand it, you would specify the
sites for the hopping by saying it connects site i in the fundamental
domain to site j in the image of the fundamental domain under some SG
symmetry g. I don't see why this representation would simplify
calculations, and it is quite hard for humans to keep track of, compared
to just specifying everything in the translational unit cell.


> All the above stuff may sound quite complicated, but in practice it
> involves only simple algorithms and some (integer only!) linear algebra.
> (There’s no need to trouble ourselves with floating-point rotation
> matrices, so: no epsilons) Moreover, all the stuff neatly “factorizes” into
> the translational symmetries & the rest. In the trivial case “the rest” is
> simply the trivial group with a single element and we have only
> translational symmetries. As such, the added complexity of the
> non-translational symmetries can be completely isolated and only needs to
> be considered when needed.
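
To illustrate the "integer only" point: in the basis of primitive lattice
vectors every point-group operation of the lattice is an integer matrix,
so products, inverses and orbits of sites can be computed exactly. A small
sketch (the single floating-point step is the one-time change of basis;
everything downstream is exact):

import numpy as np

def to_lattice_basis(rotation_cartesian, lattice_vectors):
    """Express a Cartesian point-group operation in the basis of primitive
    lattice vectors.  For a genuine lattice symmetry the result is an
    integer matrix, so all further group arithmetic (products, inverses,
    orbits of sites) is exact."""
    a = np.asarray(lattice_vectors, dtype=float).T   # columns = primitive vectors
    m = np.linalg.inv(a) @ rotation_cartesian @ a
    m_int = np.rint(m).astype(int)
    if not np.allclose(m, m_int, atol=1e-9):
        raise ValueError("not a symmetry of this lattice")
    return m_int

# Example: 90-degree rotation of a square lattice with a1 = (1, 0), a2 = (0, 1).
c4 = to_lattice_basis(np.array([[0, -1], [1, 0]]), np.eye(2))
assert np.array_equal(c4 @ c4 @ c4 @ c4, np.eye(2, dtype=int))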


I have some comments about this in the email.


> I wasn’t sure about that last bit until very recently. I believed that
> non-symmorphic space groups cannot be split up into the translational part
> and the rest. But in fact it’s possible; the only thing that one loses is
> that the coset representatives do not form a subgroup of G. But that
> property is not needed for the unique reconstruction of the system from the
> fundamental domain. (If it’s ever useful for other things it can be
> extracted easily.)


Agreed, this is one of the beauties of crystallographic group theory.

I address some more of your questions in the reply to my email.

Best,
Daniel
