>
> You are correct that I have defined a symmetric tensor and added
> asymmetric data! However, I took this example almost verbatim from the code
> examples in the documentation on the website for TensExpr. I believe it
> would be good to update the examples under TensExpr so that the declared
> symmetry matches the data.
Yes, that is definitely an error.
> Second - as a student approaching GR from a physics perspective, the need
> to declare a symmetry for the indices is very non-intuitive, especially
> because I have never studied permutations formally, and am therefore
> completely unfamiliar with the BSGS formalism. Would it be possible to
> create tensors which have only the identity permutation by default? That
> way, those of us who are relatively less concerned with canonicalization
> can use tensors without worrying about conflicts between the data we put in
> and the symbolic representation.
The symmetry requirement could be made optional by filling in a default
value. Anyway, I would consider this low-priority, as there are more serious
issues.
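In fact, the module's existing API already lets you declare a tensor with no symmetry at all: passing the one-cell partitions [[1], [1]] gives the identity permutation only, so canonicalization never reorders the slots. A minimal sketch (names are illustrative):

```python
from sympy.tensor.tensor import TensorIndexType, tensor_indices, tensorhead

Lorentz = TensorIndexType('Lorentz')
i0, i1 = tensor_indices('i0 i1', Lorentz)

# [[1], [1]] = one cell per slot: no symmetry between the indices,
# so the component data need not satisfy any symmetry constraint.
A = tensorhead('A', [Lorentz]*2, [[1], [1]])

# With no declared symmetry, A(i0, i1) and A(i1, i0) stay distinct:
expr = A(i0, i1) - A(i1, i0)
```

With a symmetric or antisymmetric declaration canon_bp() would collapse this difference to 2*A(i0, i1) or 0; with [[1], [1]] it stays as a two-term sum.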
BSGS is not a tensor formalism; it is a base and strong generating set of a
permutation group, used internally by the algorithm that simplifies
expressions according to their symmetries.
Consider this example:
In [43]: A = tensorhead('A', [Lorentz]*2, [[2]])     # antisymmetric
In [44]: B = tensorhead('B', [Lorentz]*2, [[1]*2])   # symmetric
In [45]: expr = A(i0, i1)*B(-i0, -i1)
In [46]: expr.canon_bp()
Out[46]: 0
It correctly recognizes that *expr* is zero (a double contraction between a
symmetric and an antisymmetric tensor vanishes).
> Third - I tried running my code with the symmetry definition you provided.
> The symbolic representation was correct, because I got
> >>> A_symm
> A(i0, i1) + A(i1, i0)
>
> However, the data had the same problem as before, i.e.,
> >>> A_symm.data
> array([[0, 2], [4, 6]], dtype=object)
>
> The _TensorDataLazyEvaluator does not attempt to permute the indices in
> the _get function when working with a TensAdd expression. Instead, it just
> adds the indices in alphabetical order. So if you attempt
>
> >>> T = B(i0, i1, i2) + C(i1, i0, i2) + D(i2, i0, i1)
> >>> T.data
>
> The result will be incorrect; more specifically, assuming i0 = 'i0',
> etc., the _get function will return the data for
> B(i0, i1, i2) + C(i0, i1, i2) + D(i0, i1, i2)
>
> I actually know how to fix this (indeed, when hacking around on my own
> computer I did fix it) by checking the index positions and judicious use of
> the np.swapaxes function.
That is clearly a bug. The module *sympy.tensor.tensor* has undergone very
little testing, so it's likely to have various kinds of problems.
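For reference, the fix described above can be sketched in plain numpy. Assume `data` holds the components of a rank-2 tensor A(i0, i1) (illustrative values); the data for the permuted term A(i1, i0) must have its axes swapped before the addition:

```python
import numpy as np

# Hypothetical component data for A(i0, i1)
data = np.array([[0, 1], [2, 3]])

# Correct data for A(i0, i1) + A(i1, i0): swap the axes of the
# second addend to match the first addend's index order, then add.
correct = data + np.swapaxes(data, 0, 1)   # [[0, 3], [3, 6]]

# What the current _get effectively returns (indices taken in the
# same order for every addend): 2*data.
buggy = data + data                        # [[0, 2], [4, 6]]
```

For higher-rank addends such as C(i1, i0, i2), the general form would use np.transpose with the full axis permutation rather than a single swapaxes call.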
By the way, I am not very happy with the current way component data are
added to tensors. The abstract index notation is meant to be basis-free,
and defining components on it does not make sense. I believe there is a
need for a new kind of index, say *BasisTensorIndex*, which would take a
*sympy.diffgeom.CoordSystem* object as a parameter. Correspondingly,
*TensorIndexType* should be assigned a *sympy.diffgeom.Patch* instance.
After this, the *.data* assignment should be permissible only if the
indices are concrete, that is, if they have a coordinate-system basis. One
could then read the component data in a new coordinate system by defining a
new *CoordSystem* instance, connecting it to the first one as defined in
*sympy.diffgeom*, and then creating other *BasisTensorIndex* objects
referring to the new coordinate system.
Such indices would also allow repeated indices, e.g. *A(-i, i, i)*; in the
abstract index notation this is not allowed. Obviously this requires
support for the canonicalization of repeated indices, or disabling
canonicalization for them.
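To illustrate the point with plain numpy (a hedged sketch, not the proposed API): with concrete, basis-dependent components, a repeated index is a perfectly well-defined operation even though abstract index notation forbids it:

```python
import numpy as np

M = np.arange(9).reshape(3, 3)

# "M(i, i)" with a repeated concrete index, not summed: the diagonal.
diag = np.einsum('ii->i', M)   # [0, 4, 8]

# Summing over the repeated index instead gives the trace.
tr = np.einsum('ii->', M)      # 12
```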
> Covariant and partial differentiation I have actually implemented, in a
> hacky sort of way, but I'm running into bugs in the multiplication that I
> haven't tracked down yet.
My idea is to create *CovarDerivative* and *PartialDerivative* objects. One
would use *CovarDerivative(i0, A(i1, -i0))*, noting that the covariant
derivative is fine in the abstract index notation, while the partial
derivative needs the coordinate-system-based index notation.
The problem with this idea is that it is incompatible with the current
algorithm that detects repeated indices.
> I guess what I want to know is: are you interested in taking the current
> sympy.tensor.tensor module and fixing these bugs? I am willing to spend
> some time implementing/fixing the problems with the
> _TensorDataLazyEvaluator class, I just don't know what is planned.
I would like a major rewrite of that module, but I don't have much time
now. One point is to make *TensAdd* and *TensMul* inherit from *Add* and
*Mul*, and to adapt the index-contraction tools to handle unexpanded tensor
expressions, e.g. A(i) * ( B(-i, j) + C(j, -i) ), and unexpanded
derivatives, e.g. PartialDerivative(i, A(-i, j) + B(-i, j)).
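As a numerical sanity check of what handling unexpanded expressions must preserve, here is a hedged numpy sketch (Euclidean metric assumed, so raising and lowering indices is trivial) verifying that A(i) * (B(-i, j) + C(j, -i)) equals the expanded sum:

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.random(3)
B = rng.random((3, 3))
C = rng.random((3, 3))

# Unexpanded: contract i of A against the i-slot of each addend;
# note that C(j, -i) carries the contracted index in its second slot.
lhs = np.einsum('i,ij->j', A, B + C.T)

# Expanded: A(i)*B(-i, j) + A(i)*C(j, -i)
rhs = np.einsum('i,ij->j', A, B) + np.einsum('i,ji->j', A, C)
```

The two must agree for any components; an index-contraction tool working on unexpanded sums has to track the slot of the contracted index per addend, exactly as the einsum subscripts do here.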
By the way, any contribution is welcome. Discussion about the source code
usually takes place on GitHub.
> With regards to the diffgeom module, I found it to be unintuitive and
> even less documented than tensor. It was not really clear to me how I was
> supposed to define arbitrary tensors (I guess build them up from base
> vector fields? I didn't look into it much), and it is very, very nice to
> have the Einstein index notation.
diffgeom only supports fully covariant tensors, built as sums of
TensorProduct objects of one-forms. For example, the matrix
[[1, 2], [3, 4]] would be represented as
TP(dx, dx) + 2*TP(dx, dy) + 3*TP(dy, dx) + 4*TP(dy, dy)
where dx, dy are the basis one-forms; similarly, TP(dx, dy, dz) would be a
basis element of a rank-3 tensor. It's another way to deal with tensors, by
explicitly writing out all the components.
> Keep in mind that I am not any kind of GR theorist or Differential
> Geometer, I'm an experimentalist who just wants an easy way to convert
> equations into code. Therefore, if you can tell me what needs to be done,
> I'm willing to work on supporting Einstein Index Notation in
> sympy.tensor.tensor.
The main problem is reasoning about the existing code. I tried to clean it
up by separating some of the index algorithms from the tensor expression
classes; about one year ago I put them in *TIDS*, a class you can find in
that file.
You could fix the bugs you reported if you like; that's easy. It is best to
discuss code on GitHub, which is better suited for that; feel free to
create a PR or gists.
--
You received this message because you are subscribed to the Google Groups
"sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/sympy.
To view this discussion on the web visit
https://groups.google.com/d/msgid/sympy/e29c6281-bd74-4773-bd39-2771e37ba279%40googlegroups.com.