On Thursday, 19 May 2016 at 12:04:31 UTC, Andrei Alexandrescu wrote:
On 5/19/16 4:12 AM, Jens Müller wrote:

if (a.length == 0 || b.length == 0)
    return 0;
const amax = a.length - 1, bmax = b.length - 1;
size_t i = 0, j = 0;
double sum = 0;
for (;;)
{
    if (a[i].index < b[j].index)
    {
        if (i++ == amax) break;
    }
    else if (a[i].index > b[j].index)
    {
        bumpJ: if (j++ == bmax) break;
    }
    else
    {
        assert(a[i].index == b[j].index);
        sum += a[i].value * b[j].value;
        if (i++ == amax) break;
        goto bumpJ;
    }
}
return sum;

Then if you add the sentinel you only need the bounds tests in the third case.

I'm not seeing it. Let me explain.
Consider the input a = [1] and b = [2, 3] (I only write the indices). The smaller back index is 1, i.e., a.back's index is the chosen sentinel. Assume we set b.back's index to a.back's index, restoring it after the loop.

In the case a[i].index < b[j].index I still have to check whether a[i].index == a.back.index and break, because otherwise i is incremented past the end (here a[i].index = 1 and b[j].index = 2, for i = 0 and j = 0 respectively). In the equal case I only check a[i].index == a.back.index, since that implies b[j].index == a.back.index. So in sum I have two bounds tests. But I suspect this is not what you have in mind.
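To make my reading concrete, here is a sketch of that sentinel variant, in C++ rather than D only so it is easy to compile standalone. Pair, dotSentinel, and the fix-up for a match against the planted sentinel are my own names and choices, not code from this thread:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for the post's (index, value) elements; indices strictly increasing.
struct Pair { std::size_t index; double value; };

// Sentinel variant: assume a.back's index is the smaller of the two back
// indices (swap the roles otherwise), plant it into b.back, and merge.
// The a-advancing branch and the equal branch each still compare against
// the sentinel index -- the two bounds tests described above -- while the
// b-advancing branch needs none: the planted sentinel stops j at the end.
double dotSentinel(std::vector<Pair>& a, std::vector<Pair>& b) {
    if (a.empty() || b.empty()) return 0.0;
    assert(a.back().index <= b.back().index);
    const std::size_t saved = b.back().index;
    const std::size_t last = a.back().index;
    b.back().index = last;                    // plant the sentinel
    const std::size_t bmax = b.size() - 1;
    double sum = 0.0;
    for (std::size_t i = 0, j = 0;;) {
        if (a[i].index < b[j].index) {
            if (a[i].index == last) break;    // bounds test 1
            ++i;
        } else if (a[i].index > b[j].index) {
            ++j;                              // no test: sentinel stops j
        } else {
            // Count the match only if it is a genuine shared index,
            // not merely the planted sentinel.
            if (j != bmax || saved == last)
                sum += a[i].value * b[j].value;
            if (a[i].index == last) break;    // bounds test 2
            ++i; ++j;
        }
    }
    b.back().index = saved;                   // restore b
    return sum;
}
```

On the example above (a = [1], b = [2, 3]) the first branch fires immediately and the loop exits with sum 0, without j ever being tested against bmax.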
This does not look right.
Here are the plots for the implementations.

dot1 is my baseline, which is indeed worse than your baseline (dot2), though only on gdc. I chose dot2 as the baseline for computing the speedup. dot3 is the sentinel version.

I removed the code that optimizes for large gaps, because it only adds confusion. I may generate benchmark data with larger gaps later to see whether it is worthwhile for such inputs.
It looks much more regular now (ldc is still strange).

