On Sunday, November 2, 2014 1:16:35 AM UTC+1, Jamie Titus wrote:
>
>
>
> On Saturday, November 1, 2014 5:01:04 PM UTC-7, Francesco Bonazzi wrote:
>>
>> Apart from the bug fixes, it's quite hard to go on rigorously.
>>
>> You wrote that you have some hackish stuff about derivatives; on GitHub 
>> you can create a gist, which is usually a piece of code unrelated to a 
>> project that people can see and comment on.
>>
>
> I have made a gist with some code I wrote to do derivatives. 
> https://gist.github.com/seniosh/ef0e1550110eec0c6b41#file-grtensor-py 
> where can that then be discussed? I already realize several problems 
> (partial differentiation gets treated as a Tensor, which is a problem if 
> you want to switch coordinates), but it's a
> starting point for discussion at least. 
>

Component-wise operations are already partly implemented in *sympy.diffgeom*, 
though tensors there are, unfortunately, fully covariant only. It would be 
cleaner to share the same algorithm between those two modules, as the 
operations are practically the same when acting on components; 
*sympy.diffgeom* simply has a different way of presenting the same objects 
to the end user.

Furthermore, in *sympy.tensor.tensor* one may simply wish to manipulate 
tensor formulae without any knowledge of the components of the tensors 
being handled. Maybe you would like to express the covariant derivative of 
a tensor whose components are unknown to you.

If you want to operate on components of tensors, I suggest you look more 
closely at *sympy.diffgeom*. You may be unfamiliar with its syntax, but it's 
not that complicated to learn; it's just a different syntax for expressing 
the same things concerning components of tensors. The function 
*twoform_to_matrix* clearly illustrates this kind of correspondence: it 
transforms a summation of tensor products into a matrix (a matrix being a 
rank-2 tensor without valence markings). Try playing with that.
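For instance, here is a minimal sketch using the current public API (import paths may differ slightly across sympy versions):

```python
from sympy.diffgeom import TensorProduct, twoform_to_matrix
from sympy.diffgeom.rn import R2

# The Euclidean metric on R^2, written as a sum of tensor products
# of the coordinate one-forms dx and dy.
metric = TensorProduct(R2.dx, R2.dx) + TensorProduct(R2.dy, R2.dy)

# twoform_to_matrix recovers the rank-2 component matrix from the
# tensor-product expression.
m = twoform_to_matrix(metric)
print(m)
```

This is exactly the correspondence described above: the same rank-2 object, shown either as a sum of tensor products or as a plain matrix of components.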

 
>
>> If you noticed, the tensor module only allows expanded polynomial 
>> expressions; that is difficult to change because the algorithm was written 
>> for that, and supporting something different would require a lot of 
>> rewriting.
>>
>> I have a pending PR on the tensor module, which is a major rewrite of the 
>> code, but I'm not very satisfied with it.
>>
> Should I move discussion to that PR? Or should I make a new one?
>

My pending PR is https://github.com/sympy/sympy/pull/7762

I am not happy with it, and I think I am going to close it, because the 
performance loss is significant.

The ideas I introduced with it are still valid. The current tensor module 
handles polynomial expressions of tensors, that is, TensMul and TensAdd. A 
TensMul contains a list of TensorHead and TensorIndex objects, while a 
TensAdd is a list of TensMul objects.
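In today's API that structure looks roughly like this (a sketch; the constructor syntax has changed over the years, so older versions use `tensorhead` instead of the `TensorHead` class shown here):

```python
from sympy.tensor.tensor import (TensorIndexType, tensor_indices,
                                 TensorHead, TensAdd)

Lorentz = TensorIndexType('Lorentz')
i, j = tensor_indices('i j', Lorentz)
A = TensorHead('A', [Lorentz, Lorentz])
B = TensorHead('B', [Lorentz, Lorentz])

# A sum of two tensor terms: a TensAdd whose arguments are the
# individual products/tensors.
expr = A(i, j) + B(j, i)
print(type(expr).__name__)
```

Everything the module builds is a polynomial of such terms; there is no node for an unexpanded product of sums.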

All operations (multiplication, addition, canonicalization) assume this 
strict structure: a TensMul is supposed to have a defined index order, while 
a TensAdd does not. Because every algorithm relies on this assumption, 
introducing objects such as CovarDerivative requires major rewrites, and 
that's the reason my PR has been pending for such a long time.

My idea is to introduce the following API which all classes inheriting 
TensExpr should have:

   - .has_index_order()  specifies whether the expression has an index 
   order; A(i0, i1) has it, A(i0, i1) + B(i1, i0) does not.
   - .free_indices_list() returns a list of the free indices if an order 
   exists, otherwise raises an error.
   - .free_indices_set() returns a set (= a collection without order) of 
   free indices; this should always work.
   - analogous methods for dummy indices.
   - index-symmetry information and methods should raise an error when 
   called on expressions without index order. Currently they are all 
   implemented on TensMul, but future objects such as CovarDerivative may also 
   need all of those methods.
   - no abstract class tied to index order: a TensMul should be 
   expressible in forms without index order, such as A(i)*(B(j, k) + C(k, j)); 
   index order is not a class feature.
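As a toy illustration of the intended semantics (purely hypothetical code, not the sympy implementation; `MockTensExpr` and its internals are made up for this sketch):

```python
class MockTensExpr:
    """Toy model of the proposed API: a tensor expression that may or
    may not carry a well-defined free-index order (hypothetical)."""

    def __init__(self, free_indices, ordered=True):
        self._free = list(free_indices)
        self._ordered = ordered

    def has_index_order(self):
        return self._ordered

    def free_indices_list(self):
        # Only meaningful when an index order exists.
        if not self._ordered:
            raise ValueError("expression has no well-defined index order")
        return list(self._free)

    def free_indices_set(self):
        # A set makes sense even without an order.
        return set(self._free)

# A(i0, i1) has an index order; A(i0, i1) + B(i1, i0) does not.
a = MockTensExpr(['i0', 'i1'])
s = MockTensExpr(['i0', 'i1'], ordered=False)
print(a.free_indices_list())
print(s.free_indices_set())
```

The point is that order-dependent queries fail loudly on order-free expressions, while set-based queries always succeed.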
   
All current algorithms assume that TensMul always has index order and that 
it's the only class allowed to possess it. Obviously, if we introduce 
CovarDerivative(i0, A(i1, i2)), this should be exposed as a tensor with 
index order and indices [i0, i1, i2], which is not a TensMul.

 
>
>> An alternative is to expose special objects such as partial derivatives 
>> to the index structure of a tensor.
>>
>> If you like to do something, try to see how to create a covariant 
>> derivative object that exposes the index structure.
>>
>> Say, CovarDerivative(i, A(j, k)) should be treated as a tensor of indices 
>> i j k. You will realize that the way the tensor module was written makes 
>> this task quite hard.
>>
>
> When I implemented it, my CovarDerivative function operated on data, and 
> then returned a new TensMul object, so CovarDerivative(i, A(j, k)) returns 
> dA(i, j, k).  
>

Operating on data would fit better in *sympy.diffgeom*, which already 
supports partial, covariant and Lie derivatives. If you wish to produce a 
fast implementation of tensor objects as multidimensional arrays with 
valence markings, I would suggest creating a new class, say *TensorArray*, 
inside *sympy.diffgeom*, providing a compact way to represent tensor 
components, information on the coordinate system you're working in, and 
valence (i.e. covariance/contravariance) markings.
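A minimal sketch of what such a class could look like (hypothetical: the name *TensorArray*, its attributes and the valence encoding are all made up here for illustration):

```python
from sympy import Array, symbols

class TensorArray:
    """Hypothetical: tensor components stored as an N-dimensional
    array, plus valence markings ('u' = contravariant, 'd' = covariant)
    and the coordinate system the components refer to."""

    def __init__(self, components, valence, coords):
        self.components = Array(components)
        self.valence = tuple(valence)
        self.coords = tuple(coords)
        # One valence marking per tensor index.
        if self.components.rank() != len(self.valence):
            raise ValueError("one valence marking per index is required")

x, y = symbols('x y')
# Euclidean metric on R^2: rank 2, fully covariant, in (x, y) coordinates.
g = TensorArray([[1, 0], [0, 1]], valence='dd', coords=(x, y))
print(g.components)
```

From such an object one could then compute partial, covariant and Lie derivatives component-wise, reusing the machinery *sympy.diffgeom* already has.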

Supplying *sympy.tensor.tensor* with all of its missing features is much 
harder and requires a lot of time, but if you wish to play with that, I can 
support you. 
