Andrew Lazarus <[EMAIL PROTECTED]> writes:
> Because I know the 25 closest are going to be fairly close in each
> coordinate, I did try a multicolumn index on the last 6 columns and
> used a +/- 0.1 or 0.2 tolerance on each. (The 25 best are very probably
> inside that hypercube on the distribution of data in question.)
> This hypercube tended to ha
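
[For readers skimming the thread, a minimal sketch of the kind of hypercube
prefilter described above, assuming the six series values live in columns
c1..c6 of a table named points; the table name, column names, and target
values are illustrative, not taken from the thread:

    -- Hypothetical table: points(id, c1 real, ..., c6 real), with a
    -- multicolumn index on (c1, c2, c3, c4, c5, c6).
    -- Restrict to a +/- 0.1 hypercube around the target point, then
    -- rank the survivors by squared Euclidean distance.
    SELECT id
    FROM   points
    WHERE  c1 BETWEEN 0.42 - 0.1 AND 0.42 + 0.1
      AND  c2 BETWEEN 0.17 - 0.1 AND 0.17 + 0.1
      AND  c3 BETWEEN 0.88 - 0.1 AND 0.88 + 0.1
      AND  c4 BETWEEN 0.05 - 0.1 AND 0.05 + 0.1
      AND  c5 BETWEEN 0.63 - 0.1 AND 0.63 + 0.1
      AND  c6 BETWEEN 0.30 - 0.1 AND 0.30 + 0.1
    ORDER  BY (c1 - 0.42)^2 + (c2 - 0.17)^2 + (c3 - 0.88)^2
            + (c4 - 0.05)^2 + (c5 - 0.63)^2 + (c6 - 0.30)^2
    LIMIT  25;

With a plain multicolumn btree index, only the leading column's range
actually narrows the index scan; the other conditions are applied as
filters, so the scan can still touch many rows.]
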
Jeff Davis wrote:
On Fri, 2007-04-20 at 12:07 -0700, Andrew Lazarus wrote:
> I have a table with 2.5 million real[] arrays. (They are points in a
> time series.) Given a new array X, I'd like to find, say, the 25
> closest to X in some sense--for simplification, let's just say in the
> usual vector norm. Speed is critical here, and everything I have tried
> has been too slow.
> I im
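
[For context, a minimal sketch of the brute-force version of this query,
assuming a hypothetical table series(id, pt real[]) and a helper function
for the Euclidean distance; none of these names come from the thread:

    -- Hypothetical helper: Euclidean distance between two real[] arrays
    -- of the same length.
    CREATE OR REPLACE FUNCTION l2_dist(a real[], b real[])
    RETURNS double precision AS $$
    DECLARE
        s double precision := 0;
        i integer;
    BEGIN
        FOR i IN array_lower(a, 1) .. array_upper(a, 1) LOOP
            s := s + (a[i] - b[i]) ^ 2;
        END LOOP;
        RETURN sqrt(s);
    END;
    $$ LANGUAGE plpgsql IMMUTABLE STRICT;

    -- Naive baseline: a full sequential scan over all 2.5 million rows,
    -- computing the distance to the query point for every one of them.
    -- The short literal array stands in for the query array X.
    SELECT id
    FROM   series
    ORDER  BY l2_dist(pt, ARRAY[0.1, 0.2, 0.3]::real[])
    LIMIT  25;

This is only the baseline being complained about: it must evaluate the
distance for every row before it can pick the 25 smallest.]
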