The GitHub issue tracker requested that I start a mailing list thread, so this 
is a cross-post of ENH#28394.

einsum_path pre-analyzes the best contraction strategy for an Einstein sum; 
fine. einsum can accept that contraction strategy; also fine. I imagine this 
works well when there are a few very large calls to einsum.
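For readers unfamiliar with the two-step API in question, a minimal sketch of the plan-then-reuse pattern (array names and shapes here are just illustrative):

```python
import numpy as np

# Plan once with einsum_path, then hand the resulting path back to einsum.
a = np.random.rand(10, 20)
b = np.random.rand(20, 30)
c = np.random.rand(30, 5)

# path is a list like ['einsum_path', (0, 1), (0, 1)] describing which
# operand pairs to contract at each step.
path, info = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='optimal')

# Reuse the precomputed path on the actual call (and on later calls).
result = np.einsum('ij,jk,kl->il', a, b, c, optimize=path)
```

For large operands the planning cost is amortized away, which is the regime the current design serves well.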

Where this breaks down is when the inputs are smaller or the calls more 
frequent: the Python-side overhead of einsum dominates whenever optimize is 
not False. This effect can get really bad, and easily overwhelms any benefit 
of the planned einsum_path contraction.
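The overhead can be seen with a rough micro-benchmark sketch like the one below (absolute numbers vary by machine and NumPy version; the shapes are arbitrary small examples):

```python
import timeit
import numpy as np

# Tiny operands, where per-call Python overhead dominates the actual math.
a = np.random.rand(4, 4)
b = np.random.rand(4, 4)
path, _ = np.einsum_path('ij,jk->ik', a, b, optimize='optimal')

n = 2000
# optimize=False dispatches almost directly to the C einsum core;
# any other optimize value pays for path handling in Python on every call.
t_plain = timeit.timeit(lambda: np.einsum('ij,jk->ik', a, b, optimize=False),
                        number=n)
t_path = timeit.timeit(lambda: np.einsum('ij,jk->ik', a, b, optimize=path),
                       number=n)
```

On inputs this small, t_path is typically the larger of the two even though the path was precomputed, which is exactly the problem described above.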

Another issue is that the planned contraction implies a specific subscript 
string, but that string must be passed to einsum again; the API therefore 
allows the subscripts and optimize arguments to fall out of sync.
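To illustrate the mismatch: the path object is a plain list of contraction pairs and records nothing about the subscripts it was derived from, so nothing stops a caller from reusing it with a different subscript string. (The result is still numerically correct, since the path only fixes the contraction order, but the precomputed order may be arbitrarily bad for the new expression. Shapes here are illustrative.)

```python
import numpy as np

a = np.random.rand(5, 5)
b = np.random.rand(5, 5)
c = np.random.rand(5, 5)

# Path planned for one subscript string...
path, _ = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='optimal')

# ...silently accepted with a different one. Nothing ties the two together.
other = np.einsum('il,lj,jk->ik', a, b, c, optimize=path)
```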

I can imagine a few solutions, but my current favourite is a simple pipeline: 
loop evaluation -> AST -> a compiled set of calls to tensordot and c_einsum. 
In my testing this solves all of the problems described above.
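To make the idea concrete, here is a hand-specialized sketch of the compile-once approach for the chain 'ij,jk,kl->il'. This is not the implementation from the issue: a real version would generate the source (or AST) from the einsum_path output for arbitrary subscripts, falling back to c_einsum for steps tensordot cannot express; here the body is written out by hand to keep the example short.

```python
import numpy as np

# Source text a planner might emit for the path [(0, 1), (0, 1)] of
# 'ij,jk,kl->il', i.e. (a @ b) @ c as explicit tensordot calls.
_SRC = """
def contracted(a, b, c):
    # Step 1: contract axis 1 of a with axis 0 of b -> 'ij,jk->ik'
    t0 = np.tensordot(a, b, axes=([1], [0]))
    # Step 2: contract axis 1 of t0 with axis 0 of c -> 'ik,kl->il'
    return np.tensordot(t0, c, axes=([1], [0]))
"""

# Compile once; later calls skip all Python-side einsum planning.
namespace = {'np': np}
exec(compile(_SRC, '<einsum-plan>', 'exec'), namespace)
contracted = namespace['contracted']
```

Because the subscripts, the path, and the emitted calls are all fixed at compile time, this also removes any possibility of the subscript/optimize mismatch described earlier.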

In the GitHub issue I posted example code for an implementation that uses 
`ast`. I can post it here as well on request, though I don't know whether it 
will preserve syntax highlighting etc.
_______________________________________________
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/