On 15 April 2012 21:06, Mark Hahn <[email protected]> wrote:

>  >    would at least reach thousands, so fetching all keys is quite
> demanding
>
> My suggestion may well be the wrong path to take, but I'd like to point out
> that fetching thousands of keys is nothing.  Getting 16 kbytes of data
> takes a few ms.  And internally couch has all the keys already sorted and
> ready to dump when you ask for it.  It's not like this 16K is going across
> the wire to the client.
>

Latency indeed shouldn't be an issue, but I do wonder about the amount of
CPU my particular scenario would use.
But...


>
> I use this kind of query all the time.  However, using a reduce would be
> much better.  You could keep a list of the ten lowest values found so far.
>  That is a finite amount of data and legal for a reduce.
>

...this is an interesting idea.
If I hard-code the limit into the reduce function, perhaps I could indeed
ignore the rest of the rows once I hit it. Since the rows I ignore in the
reduce should never be updated, maybe they won't be incorporated into
future reduce calculations and thus won't cost anything?
Also, how would a grouped reduce (group=true) treat this scheme?
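
For concreteness, here is a minimal sketch of what I have in mind, in
JavaScript (doc.category and doc.score are just placeholders for the
fields in my actual documents):

// Map: emit one row per document, keyed by a grouping field, with the
// numeric value I want to rank by.
function (doc) {
  if (doc.score !== undefined) {
    emit(doc.category, doc.score);
  }
}

// Reduce: keep only the LIMIT lowest values seen so far, so the reduce
// output stays a small, fixed-size array.
function (keys, values, rereduce) {
  var LIMIT = 10;
  // On rereduce, `values` is an array of arrays returned by earlier
  // reduce calls; flatten one level before merging.
  var merged = rereduce ? [].concat.apply([], values) : values;
  merged.sort(function (a, b) { return a - b; });
  return merged.slice(0, LIMIT);
}

Queried with group=true, I suppose this should give the ten lowest
values per distinct key, which may partly answer my own question above.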
