On 01.03.20 21:58, p.shkadzko wrote:
Matrix!T matrixDotProduct(T)(Matrix!T m1, Matrix!T m2)
in
{
assert(m1.rows == m2.cols);
This asserts that the result is a square matrix. I think you want
`m1.cols==m2.rows` instead.
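For reference, a self-contained sketch of the corrected contract (the `Matrix` fields are assumed from the thread's code; the body is a plain naive implementation, not the original poster's exact one). Multiplying (r1 x c1) by (r2 x c2) is defined only when c1 == r2, and the result is r1 x c2, so the inner dimensions are what the contract must check:

```d
struct Matrix(T)
{
    T[] data; // flat, row-major
    int rows, cols;
}

Matrix!T matrixDotProduct(T)(Matrix!T m1, Matrix!T m2)
in
{
    assert(m1.cols == m2.rows); // inner dimensions, not m1.rows == m2.cols
}
do
{
    auto res = Matrix!T(new T[m1.rows * m2.cols], m1.rows, m2.cols);
    res.data[] = 0;
    foreach (i; 0 .. m1.rows)
        foreach (j; 0 .. m2.cols)
            foreach (k; 0 .. m1.cols)
                res.data[i * res.cols + j] +=
                    m1.data[i * m1.cols + k] * m2.data[k * m2.cols + j];
    return res;
}
```

With the original `m1.rows == m2.cols` check, multiplying a 2x3 by a 3x2 matrix would pass the contract only by accident of it being the transpose condition.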
On Sunday, 1 March 2020 at 20:58:42 UTC, p.shkadzko wrote:
pragma(inline) static int toIdx(T)(Matrix!T m, in int i, in int j)
{
    return m.cols * i + j;
}
This is row-major order [1]. BTW: Why don't you make toIdx a
member of Matrix? It saves one parameter. You may also define
opIndex as
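The snippet is cut off above, but a member `toIdx` plus a two-argument `opIndex` might look roughly like this (a hypothetical sketch; the field names are assumed from the thread's code, not the author's actual definition):

```d
struct Matrix(T)
{
    T[] data; // flat, row-major
    int rows, cols;

    // Row-major linear index; as a member it no longer needs
    // the matrix passed in as a parameter.
    pragma(inline, true)
    int toIdx(in int i, in int j) const
    {
        return cols * i + j;
    }

    // 2-D element access: m[i, j]
    ref T opIndex(in int i, in int j)
    {
        return data[toIdx(i, j)];
    }
}
```

With that in place, `m[i, j]` reads and writes through the same row-major indexing the free function used.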
On Tuesday, 3 March 2020 at 10:25:27 UTC, maarten van damme wrote:
it is difficult to write an efficient matrix matrix
multiplication in any language. If you want a fair comparison,
implement your naive method in python and compare those timings.
[snip]
And of course there's going to be a
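The like-for-like baseline maarten suggests could be sketched like this in Python (sizes kept small so the naive triple loop finishes quickly; `naive_matmul` is an illustrative name, not from the thread):

```python
import time
import numpy as np

def naive_matmul(a, b):
    """Triple-loop matrix product on plain Python lists (row-major)."""
    n, k, m = len(a), len(b), len(b[0])
    assert len(a[0]) == k, "inner dimensions must match"
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for p in range(k):
            aip = a[i][p]  # hoist the left factor out of the inner loop
            for j in range(m):
                c[i][j] += aip * b[p][j]
    return c

n = 100
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c_naive = naive_matmul(a.tolist(), b.tolist())
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
c_np = a @ b
t_np = time.perf_counter() - t0

print(f"naive python: {t_naive:.3f}s   numpy: {t_np:.3f}s")
assert np.allclose(c_naive, c_np)
```

Comparing a naive D loop against numpy's BLAS-backed `@` is not apples to apples; naive-vs-naive (or BLAS-vs-BLAS) is the fair pairing.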
On Tuesday, 3 March 2020 at 10:25:27 UTC, maarten van damme wrote:
it is difficult to write an efficient matrix matrix
multiplication in any language. If you want a fair comparison,
implement your naive method in python and compare those timings.
On Tue, 3 Mar 2020 at 04:20, 9il via Digitalmars-d-learn <
digitalmars-d-learn@puremagic.com> wrote:
> On Sunday, 1
On Sunday, 1 March 2020 at 20:58:42 UTC, p.shkadzko wrote:
Hello again,
Thanks to previous thread on multidimensional arrays, I managed
to play around with pure D matrix representations and even
benchmark a little against numpy:
[...]
Matrix multiplication is about cache-friendly
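The snippet is cut off, but the point is presumably about memory access order: with row-major storage, the textbook i-j-k loop strides through the right-hand matrix column-wise and misses cache on every step. Reordering to i-k-j makes the inner loop walk both the output row and the `m2` row sequentially. A sketch assuming the thread's flat row-major layout (the function name is illustrative):

```d
// i-k-j loop order: the inner loop over j touches res and m2
// sequentially, which is cache-friendly for row-major storage.
double[] matmulIKJ(const double[] m1, const double[] m2,
                   int rows, int inner, int cols)
{
    auto res = new double[rows * cols];
    res[] = 0;
    foreach (i; 0 .. rows)
        foreach (k; 0 .. inner)
        {
            const a = m1[i * inner + k]; // reused across the whole row
            foreach (j; 0 .. cols)
                res[i * cols + j] += a * m2[k * cols + j];
        }
    return res;
}
```

Beyond loop order there is blocking/tiling, SIMD, and multithreading, which is why hand-rolled code rarely gets near an optimized BLAS.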
On Monday, 2 March 2020 at 20:56:50 UTC, jmh530 wrote:
On Monday, 2 March 2020 at 20:22:55 UTC, p.shkadzko wrote:
[snip]
Interesting growth of processing time. Could it be GC?
+------------------+-------------+
| matrixDotProduct | time (sec.) |
+------------------+-------------+
| 2x[100 x 100]    | 0.01        |
On Monday, 2 March 2020 at 20:22:55 UTC, p.shkadzko wrote:
[snip]
Interesting growth of processing time. Could it be GC?
+------------------+-------------+
| matrixDotProduct | time (sec.) |
+------------------+-------------+
| 2x[100 x 100]    | 0.01        |
| 2x[1000 x 1000]  | 2.21        |
On Sunday, 1 March 2020 at 20:58:42 UTC, p.shkadzko wrote:
Hello again,
Thanks to previous thread on multidimensional arrays, I managed
to play around with pure D matrix representations and even
benchmark a little against numpy:
[...]
Interesting growth of processing time. Could it be GC?
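One way to test the GC hypothesis (a sketch using the standard `core.memory` and `std.datetime.stopwatch` APIs; the helper name is hypothetical): preallocate all buffers up front and pause collections around the hot loop, then see whether the superlinear growth disappears.

```d
import core.memory : GC;
import std.datetime.stopwatch : StopWatch, AutoStart;
import std.typecons : Tuple, tuple;

// Times a product-sum over preallocated buffers with the GC paused,
// so any remaining slowdown cannot be blamed on collections.
Tuple!(double, long) timeWithoutGC(const double[] a, const double[] b)
{
    GC.disable();             // no collections while timing
    scope (exit) GC.enable();

    auto sw = StopWatch(AutoStart.yes);
    double s = 0;
    foreach (i; 0 .. a.length)
        s += a[i] * b[i];
    sw.stop();
    return tuple(s, sw.peek.total!"usecs");
}
```

If the timings still grow faster than O(n^3) with the GC disabled, the cause is more likely cache behavior than collection pauses.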
On Monday, 2 March 2020 at 18:17:05 UTC, p.shkadzko wrote:
[snip]
I tested @fastmath and @optmath for the toIdx function and that
didn't change anything.
@optmath is from mir, correct? I believe it implies @fastmath.
The latest code in mir doesn't have it doing anything else at
least.
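For reference, LDC's `@fastmath` lives in `ldc.attributes` (this sketch compiles with LDC only, not DMD; the `dot` function is an illustrative example, not code from the thread):

```d
import ldc.attributes : fastmath;

// @fastmath relaxes IEEE semantics (e.g. allows reassociation) for
// this function only. It mainly helps floating-point reductions,
// which is one reason it has no effect on the purely integer
// toIdx index computation.
@fastmath double dot(const double[] a, const double[] b)
{
    double s = 0;
    foreach (i; 0 .. a.length)
        s += a[i] * b[i];
    return s;
}
```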
On Monday, 2 March 2020 at 15:00:56 UTC, jmh530 wrote:
On Monday, 2 March 2020 at 13:35:15 UTC, p.shkadzko wrote:
[snip]
Thanks. I don't have time right now to review this thoroughly.
My recollection is that the dot product of two matrices is
actually matrix multiplication, correct? It
On Monday, 2 March 2020 at 13:35:15 UTC, p.shkadzko wrote:
[snip]
Thanks. I don't have time right now to review this thoroughly. My
recollection is that the dot product of two matrices is actually
matrix multiplication, correct? It generally makes sense to defer
to other people's
On Monday, 2 March 2020 at 11:33:25 UTC, jmh530 wrote:
On Sunday, 1 March 2020 at 20:58:42 UTC, p.shkadzko wrote:
Hello again,
[snip]
What compiler did you use and what flags?
Ah yes, sorry. I used the latest ldc2 (1.20.0-x64) for Windows.
Dflags -mcpu=native and "inline", "optimize",
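In dub terms those settings would look roughly like this (a hedged reconstruction of the reported flags, using dub's `dflags-ldc` and `buildOptions` keys; only the options mentioned in the post are included):

```json
{
    "dflags-ldc": ["-mcpu=native"],
    "buildOptions": ["inline", "optimize"]
}
```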
On Sunday, 1 March 2020 at 20:58:42 UTC, p.shkadzko wrote:
Hello again,
[snip]
What compiler did you use and what flags?
Hello again,
Thanks to the previous thread on multidimensional arrays, I managed
to play around with pure D matrix representations and even
benchmark a little against numpy:
15 matches