Thank you for your reply. I have a few suggestions:
First, you are correct that I defined a symmetric tensor and then added asymmetric data! However, I took this example almost verbatim from the code examples in the TensExpr documentation on the website. I believe it would be good to update the examples under TensExpr so that the declared symmetry matches the data. I would also very much have appreciated a simple explanation of the BSGS function in the documentation, written from the perspective of a non-mathematician.

Second, as a student approaching GR from a physics perspective, the need to declare a symmetry for the indices is very unintuitive, especially because I have never studied permutations formally and am therefore completely unfamiliar with the BSGS formalism. Would it be possible to create tensors that have only the identity permutation by default? That way, those of us who are relatively less concerned with canonicalization could use tensors without worrying about conflicts between the data we put in and the symbolic representation.

Third, I tried running my code with the symmetry definition you provided. The symbolic representation was correct, because I got

>>> A_symm
A(i0, i1) + A(i1, i0)

However, the data had the same problem as before, i.e.,

>>> A_symm.data
array([[0, 2],
       [4, 6]], dtype=object)

The _TensorDataLazyEvaluator does not attempt to permute the indices in the _get function when working with a TensAdd expression. Instead, it just adds the component arrays with the indices in alphabetical order. So if you attempt

>>> T = B(i0, i1, i2) + C(i1, i0, i2) + D(i2, i0, i1)
>>> T.data

the result will be incorrect. More specifically (assuming the indices sort alphabetically as i0, i1, i2), the _get function will return the data for B(i0, i1, i2) + C(i0, i1, i2) + D(i0, i1, i2). I actually know how to fix this (indeed, when hacking around on my own computer I did fix it) by checking the index positions and making judicious use of the np.swapaxes function.
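For what it's worth, here is a sketch of declaring a tensor whose only symmetry is the identity permutation. I'm assuming the TensorIndexType/TensorHead/TensorSymmetry names from the current sympy.tensor.tensor API (the interface has changed across versions), so treat this as illustrative rather than definitive:

```python
from sympy.tensor.tensor import (TensorIndexType, TensorHead,
                                 TensorSymmetry, tensor_indices)

Lorentz = TensorIndexType('Lorentz')
i0, i1 = tensor_indices('i0 i1', Lorentz)

# Only the identity permutation: no components are identified with
# each other, so completely asymmetric data would be consistent.
A = TensorHead('A', [Lorentz, Lorentz], TensorSymmetry.no_symmetry(2))

A_symm = A(i0, i1) + A(i1, i0)  # the two terms stay distinct
```

With this declaration, canonicalization has nothing to exploit, so A(i0, i1) and A(i1, i0) remain separate terms.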
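To illustrate the fix: before the component arrays of a TensAdd are summed, each term's array has to be transposed so its axes line up with the canonical free-index order. A toy numpy sketch (term_data and the slot encoding are my own illustration, not the sympy internals; np.transpose generalizes the np.swapaxes calls mentioned above):

```python
import numpy as np

def term_data(arr, slots):
    """Align a term's component array with the canonical index order.

    slots[k] is the canonical position (0 for i0, 1 for i1, ...) of the
    index occupying the k-th slot of the tensor in this term.  argsort
    gives the axis permutation that undoes the slot reordering.
    """
    return np.transpose(arr, np.argsort(slots))

# Toy rank-3 component arrays for B, C, D.
B = np.arange(8).reshape(2, 2, 2)
C = np.arange(8).reshape(2, 2, 2) * 10
D = np.arange(8).reshape(2, 2, 2) * 100

# Data for B(i0, i1, i2) + C(i1, i0, i2) + D(i2, i0, i1):
T = (term_data(B, (0, 1, 2))
     + term_data(C, (1, 0, 2))
     + term_data(D, (2, 0, 1)))

# Each component satisfies T[a, b, c] = B[a, b, c] + C[b, a, c] + D[c, a, b].
```

Simply summing B + C + D here (what _get effectively does today) would give the wrong answer for any term whose slots are not already in canonical order.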
Covariant and partial differentiation I have actually implemented, in a hacky sort of way, but I'm running into bugs in the multiplication that I haven't tracked down yet.

I guess what I want to know is: are you interested in taking the current sympy.tensor.tensor module and fixing these bugs? I am willing to spend some time implementing/fixing the problems with the _TensorDataLazyEvaluator class; I just don't know what is planned.

With regard to the diffgeom module, I found it unintuitive and even less documented than tensor. It was not really clear to me how I was supposed to define arbitrary tensors (I guess build them up from base vector fields? I didn't look into it much), and it is very, very nice to have Einstein index notation. Keep in mind that I am not any kind of GR theorist or differential geometer; I'm an experimentalist who just wants an easy way to convert equations into code. Therefore, if you can tell me what needs to be done, I'm willing to work on supporting Einstein index notation in sympy.tensor.tensor.

To view this discussion on the web visit https://groups.google.com/d/msgid/sympy/0add2ede-4170-444d-9401-4cf9ed05b594%40googlegroups.com.
