Let's get all loosey goosey in terms of nomenclature.
Suppose that there is a general pattern of matrix multiply that I will call
mm. This general pattern is a function that takes a combiner function and
an aggregation function and gives you back the function of two matrices
that you want.
A reasonable definition of this function in a made up functional language
is:
mm(combiner, aggregator) = lambda(a, b) {
  for (i, j) {
    r[i,j] = aggregator(map(combiner, {(a[i,k], b[k,j]) | k}))
  }
  return r
}
Normal linear algebra uses a product that looks like mm(*, +)
All-pairs shortest path uses repeated applications of a product like mm(+, min)
We might use honest-to-god matrix multiply for recs. Or we might use a
version that introduces weights, or one that sparsifies the final result. The
important point is that the pattern of computation is the same.
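To make the pattern concrete, here is a minimal sketch in Python, assuming
dense list-of-lists matrices (the function and variable names are mine, not
from any particular library):

```python
import operator

def mm(combiner, aggregator):
    """Build a matrix-product function from a combiner and an aggregator,
    following the mm pattern described above."""
    def product(a, b):
        n, p, m = len(a), len(b), len(b[0])
        # r[i][j] aggregates combiner(a[i][k], b[k][j]) over all k
        return [[aggregator(combiner(a[i][k], b[k][j]) for k in range(p))
                 for j in range(m)]
                for i in range(n)]
    return product

# Ordinary linear-algebra multiply: combine with *, aggregate with sum.
matmul = mm(operator.mul, sum)

# One relaxation step of all-pairs shortest path:
# combine path lengths with +, keep the best with min.
shortest = mm(operator.add, min)

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul(a, b))    # [[19, 22], [43, 50]]
print(shortest(a, a))  # [[2, 3], [4, 5]]
```

Swapping in a weighted combiner, or an aggregator that drops small values,
changes what the product means without changing the shape of the computation.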
On Wed, Sep 9, 2009 at 8:48 PM, Sean Owen <[email protected]> wrote:
> Anyway, like I said, it doesn't seem we're really doing a dot product
> to get the result matrix, but some other computation. But is it then a
> 'matrix multiplication'?
>
--
Ted Dunning, CTO
DeepDyve