Hi Nakul,

The paper I referenced uses the term "Boolean matrix", but really it
applies to any matrix product in which two properties hold:

   1. Zero product property.  a * b = 0 ==> a = 0 or b = 0
   2. Few, if any, "sums to zero".  There should be few cases of nonzero
   partial products summing to zero.

Property #1 holds for standard arithmetic * over the reals.  If we multiply
matrices with nonnegative entries, then there are no sums to zero.  If
instead the matrices mix positive and negative entries, partial products can
cancel (e.g., 1*3 + 1*(-3) = 0), so the number of nonzero entries in the
result may be smaller than the paper's algorithm would predict.

Specifically, under these two properties, nnz( A * B ) = nnz( ||A||_0 *
||B||_0 ), where ||A||_0 is the elementwise "zero norm", which maps each
nonzero entry to 1 and each zero entry to 0.
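
To make the identity concrete, here is a tiny Java sketch (my own
illustrative code with made-up names, using dense double[][] arrays for
brevity):

    public class NnzIdentity {
        // C = A * B under standard arithmetic.
        static double[][] multiply(double[][] a, double[][] b) {
            int m = a.length, n = b[0].length, cd = b.length;
            double[][] c = new double[m][n];
            for (int i = 0; i < m; i++)
                for (int p = 0; p < cd; p++)
                    for (int j = 0; j < n; j++)
                        c[i][j] += a[i][p] * b[p][j];
            return c;
        }

        // The elementwise "zero norm": 1 where nonzero, 0 elsewhere.
        static double[][] pattern(double[][] a) {
            double[][] p = new double[a.length][a[0].length];
            for (int i = 0; i < a.length; i++)
                for (int j = 0; j < a[0].length; j++)
                    p[i][j] = a[i][j] != 0 ? 1 : 0;
            return p;
        }

        static int nnz(double[][] a) {
            int count = 0;
            for (double[] row : a)
                for (double v : row)
                    if (v != 0) count++;
            return count;
        }

        public static void main(String[] args) {
            // Nonnegative entries: no cancellation, so both counts agree.
            double[][] a = { { 1, 0 }, { 0, 2 } };
            double[][] b = { { 3, 0 }, { 4, 0 } };
            System.out.println(nnz(multiply(a, b)));                   // 2
            System.out.println(nnz(multiply(pattern(a), pattern(b)))); // 2
        }
    }

If instead a row of A with entries (1, 1) meets a column of B with entries
(3, -3), the partial products cancel and nnz(A * B) drops below the pattern
count; that is exactly a violation of property #2.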

I don't know the origin of SystemML's current heuristic.
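
That said, closed-form heuristics of this flavor usually assume that
nonzeros are placed independently and uniformly at random, which yields the
textbook formula below.  Whether the linked SystemML code computes exactly
this I can't say; the class and method names here are made up:

    public class SparsityEstimate {
        /**
         * Textbook output-sparsity estimate for C = A %*% B, assuming
         * nonzeros are placed independently and uniformly at random.
         * sp1 and sp2 are the input sparsities (fraction of nonzero
         * entries); cd is the common dimension, i.e. ncol(A) == nrow(B).
         */
        static double estimateMatMultSparsity(double sp1, double sp2, long cd) {
            // A partial product a_ip * b_pj is nonzero with probability
            // sp1 * sp2, so an output cell is zero only if all cd partial
            // products vanish: P(c_ij != 0) = 1 - (1 - sp1 * sp2)^cd.
            return 1 - Math.pow(1 - sp1 * sp2, cd);
        }

        public static void main(String[] args) {
            // e.g., two 1%-dense inputs with common dimension 1000:
            System.out.println(estimateMatMultSparsity(0.01, 0.01, 1000)); // ~0.095
        }
    }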

Cheers, Dylan


On Tue, Jun 20, 2017 at 2:00 PM, Nakul Jindal <naku...@gmail.com> wrote:

> Thank you, Dylan. The paper seems interesting. The abstract reads as
> follows: "We consider the problem of doing fast and reliable estimation of
> the number z of non-zero entries in a sparse boolean matrix product"...
> I think they may be doing this for boolean matrices. The code that I
> pointed to in SystemML is used for all matrices.
> I am not sure if and how their technique translates to non-boolean
> matrices.
>
> Also, I am asking for an explanation of what is implemented, as opposed to
> what could be.
> I was wondering whether the neat-looking formula was taken from the
> literature somewhere?
>
> -Nakul
>
>
>
>
> On Tue, Jun 20, 2017 at 1:42 PM, Dylan Hutchison <dhutc...@cs.washington.edu> wrote:
>
> > In case it is helpful, this 2010 paper
> > <https://www.semanticscholar.org/paper/Better-Size-Estimation-for-Sparse-Matrix-Products-Amossen-Campagna/52f70406bb4d887e1b79af81a746184c5723848a>
> > is my favorite for estimating the nnz of a matrix product with high
> > confidence.  The algorithm is much more involved than the simple SystemML
> > calculation, because it looks at samples from the actual data.
> >
> > Here are some notes I have on the paper:
> >
> > There is a sketch algorithm [2] that gives a (1 + ε) approximation z̃ to
> > z, for any ε > 4 / z^(1/4), in O(m) time and with a bound on the number
> > of I/Os.  In relational algebra, this is estimating
> > |π_{ik}( A(i, j) ⋈ B(j, k) )|.  The idea behind the algorithm is to find,
> > for some tuning parameter k that varies with z, the k-th smallest value
> > of a hash function h(i, k).  The smaller this value is, the more i's and
> > k's match for a given j; repeat this for all j's.  Distinct (i, k) pairs
> > contribute to distinct entries in the result matrix and have distinct
> > hash values.  A sketch for this algorithm can be maintained incrementally
> > via independent samples on the i's and k's.  Computing the z̃ estimate
> > from the sample takes o(n) time for large enough z.  The larger z is, the
> > smaller the sample size we need (small z needs large samples).
> > (Everything above assumes the zero product property and freedom from
> > sums to zero.)
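> >
> > If a concrete picture helps, here is a minimal k-minimum-values (KMV)
> > style sketch in that spirit.  This is my own illustrative Java, not the
> > paper's actual algorithm; the hash mixer, the sketch size, and the
> > (k - 1) / h_(k) estimator are standard KMV choices, and all names are
> > made up:
> >
> >     import java.util.TreeSet;
> >
> >     /**
> >      * Illustrative KMV sketch: estimates the number of distinct (i, k)
> >      * output coordinates offered to it, i.e. the nnz of the product
> >      * pattern under the two properties above.
> >      */
> >     public class KmvSketch {
> >         private final int k;  // sketch size (the tuning parameter)
> >         private final TreeSet<Double> smallest = new TreeSet<>();
> >
> >         KmvSketch(int k) { this.k = k; }
> >
> >         // Hash an (i, k) coordinate to a pseudo-uniform value in [0, 1).
> >         static double hash(long i, long kCol) {
> >             long x = i * 0x9E3779B97F4A7C15L ^ kCol * 0xC2B2AE3D27D4EB4FL;
> >             x ^= x >>> 33;  x *= 0xFF51AFD7ED558CCDL;  x ^= x >>> 33;
> >             return (x >>> 11) / (double) (1L << 53);
> >         }
> >
> >         // Duplicates hash identically and are absorbed by the set, so
> >         // offering the same (i, k) twice does not bias the estimate.
> >         void offer(long i, long kCol) {
> >             smallest.add(hash(i, kCol));
> >             if (smallest.size() > k) smallest.pollLast(); // keep k smallest
> >         }
> >
> >         // (k - 1) / (k-th smallest hash); exact if we saw fewer than k.
> >         double estimate() {
> >             return smallest.size() < k
> >                 ? smallest.size() : (k - 1) / smallest.last();
> >         }
> >
> >         public static void main(String[] args) {
> >             KmvSketch s = new KmvSketch(256);
> >             for (long t = 0; t < 100_000; t++)
> >                 s.offer(t % 5_000, t % 7_000);
> >             System.out.println(s.estimate()); // ~35,000 distinct pairs
> >         }
> >     }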
> >
> >
> >
> > On Tue, Jun 20, 2017 at 12:06 PM, Nakul Jindal <naku...@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > There is code in SystemML to “estimate” the output sparsity of a matrix
> > > multiplication operation between two sparse matrices.
> > > Is there a citation for this? If not, have we done any tests to figure
> > > out how good these estimates are?
> > > https://github.com/nakul02/systemml/blob/df8d4a63d8d09cae94b6ca2634e31da554302c72/src/main/java/org/apache/sysml/hops/OptimizerUtils.java#L944-L953
> > >
> > > Thanks,
> > > Nakul
> > >
> >
>
