Hi,
Raku doesn't have matrix operations built into the language, so you're
probably referring to modules from the ecosystem?
Math::Matrix seems to have everything implemented in pure Raku, which
you should not expect to outperform pure Python without some
optimization work.
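For reference, "pure-language matrix arithmetic" usually means a naive
triple loop like the following (a hypothetical sketch in Python; the
benchmarks in question may differ), which is exactly the kind of code
that C beats by orders of magnitude:

```python
def matmul(a, b):
    """Naive pure-Python matrix multiply: a is n x m, b is m x p."""
    n, m, p = len(a), len(b), len(b[0])
    result = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):  # i-k-j loop order is slightly cache-friendlier
            aik = a[i][k]
            for j in range(p):
                result[i][j] += aik * b[k][j]
    return result

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

A pure-Raku module like Math::Matrix is doing essentially this kind of
interpreted looping, which is why it shouldn't be expected to beat pure
Python by default.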
Math::libgsl::Matrix is a NativeCall wrapper around the GNU Scientific
Library (GSL). You would expect it to be fast whenever a big task can be
expressed as a single call into the library, and slower whenever data
has to go back and forth between Raku and the library; without measuring
first, though, I can't say anything about actual performance numbers.
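The same trade-off is easy to see from the Python side with NumPy (a
sketch, assuming NumPy is installed): one big call into the native
library keeps the whole loop in compiled code, while touching elements
one at a time from the interpreter pays the boundary-crossing cost on
every single access:

```python
import numpy as np

a = np.arange(100_000, dtype=np.float64)

# One call into the native library: the summation loop runs in C.
fast = a.sum()

# Element by element from the interpreter: every iteration crosses the
# Python/native boundary, so this is dramatically slower.
slow = 0.0
for x in a:
    slow += x

# Both sums are exact here (all partial sums are integers below 2**53).
print(fast == slow)
```

A NativeCall wrapper like Math::libgsl::Matrix sits in the same
situation: `fast`-style usage plays to its strengths, `slow`-style usage
does not.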
It would be quite important to see whether the "Python performance of
matrix arithmetic" refers to NumPy. NumPy's core is written in C, and it
delegates the heavy linear algebra to an optimized BLAS/LAPACK
implementation (historically Fortran; nowadays often OpenBLAS or MKL),
which you would expect to outperform naively written C.
Other than that, there's of course the @ operator in Python, added by
PEP 465, which does matrix multiplication. CPython itself doesn't
implement any math for it: it just dispatches to the operands'
__matmul__ / __rmatmul__ methods, which none of the built-in types
define, so the actual work happens in whichever library (usually NumPy)
provides them.
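That dispatch is easy to demonstrate with a toy class (a minimal sketch,
not how NumPy actually implements it):

```python
class Mat:
    """Toy matrix type showing where `@` gets its behavior from."""

    def __init__(self, rows):
        self.rows = rows

    def __matmul__(self, other):
        # `a @ b` calls a.__matmul__(b); the interpreter does no math itself.
        n, m, p = len(self.rows), len(other.rows), len(other.rows[0])
        out = [[sum(self.rows[i][k] * other.rows[k][j] for k in range(m))
                for j in range(p)] for i in range(n)]
        return Mat(out)

a = Mat([[1, 2], [3, 4]])
b = Mat([[5, 6], [7, 8]])
print((a @ b).rows)  # [[19, 22], [43, 50]]
```

So "how @ is implemented" is really a question about the library
providing __matmul__, not about CPython.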
On top of that, you'd of course also want to try the code in question
with PyPy, which is very good at making pure-Python code go fast, and
perhaps with Cython?
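If you want to measure rather than guess, a small timeit harness like
this (hypothetical matrix size; run the same script under both CPython
and PyPy) gives you directly comparable numbers:

```python
import timeit

def matmul(a, b):
    """Naive pure-Python matrix multiply, as a benchmark workload."""
    n, m, p = len(a), len(b), len(b[0])
    out = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = a[i][k]
            for j in range(p):
                out[i][j] += aik * b[k][j]
    return out

n = 50
a = [[float(i + j) for j in range(n)] for i in range(n)]
b = [[float(i - j) for j in range(n)] for i in range(n)]

# Identical code runs unmodified under CPython and PyPy; PyPy's JIT
# typically speeds up loops like this by a large factor.
seconds = timeit.timeit(lambda: matmul(a, b), number=10)
print(f"10 multiplications of {n}x{n}: {seconds:.3f}s")
```

The same harness also makes a fair baseline against a NumPy or Raku
version of the workload, since all of them can time the identical
multiplication.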
Hope this doesn't add too many more questions, and actually answers a
thing or two?
- Timo
On 09/02/2021 02:18, Parrot Raiser wrote:
There's a post online comparing Python's performance of matrix
arithmetic to C, indicating that Python's performance was 100x (yes, 2
orders of magnitude) slower than C's.
If I understand it correctly, matrix modules in Raku call GNU code
written in C to perform the actual work.
Does that make Raku significantly faster for matrix work, which is a
large part of many contemporary applications, such as AI and video
manipulation? If it does, that could be a big selling point.