On 2008-09-25 10:21:05 -0400, Jan Lehnardt
<[EMAIL PROTECTED]> said:
a) "The value returned from a reduce should grow at a rate no bigger
than log(N) of values processed". This is why you see your view being
slow.
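To illustrate the rule, here is a minimal sketch of a reduce that violates it versus one that satisfies it (the function names are mine, not CouchDB API):

```javascript
// Illustrative CouchDB-style reduce functions.

// Bad: the reduced value grows with the number of input rows, so the
// intermediate reductions stored in the view B-tree keep growing --
// this is what makes the view slow.
function badReduce(keys, values, rereduce) {
  return values; // an ever-growing array
}

// Good: the reduced value is a constant-size number no matter how many
// rows are folded in, so it stays well under the log(N) growth bound.
function goodReduce(keys, values, rereduce) {
  var sum = 0;
  for (var i = 0; i < values.length; i++) {
    sum += values[i];
  }
  return sum;
}
```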
Where is this quote from?
b) the back-and-forth of data between CouchDB and SpiderMonkey
certainly takes some time, but it is not the limiting factor here. A
document is indexed only once. Results are cached. Your first query
will go through all your data, all subsequent queries will be
lightning fast.
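A sketch of that one-time indexing model, simulating CouchDB's emit with a local collector (the document shapes here are hypothetical):

```javascript
// Simulate the view indexer: emit collects rows into the index.
var rows = [];
function emit(key, value) {
  rows.push({ key: key, value: value });
}

// A typical map function; CouchDB runs it once per document and keeps
// the emitted rows, so later queries read the stored index instead of
// re-running map over the whole database.
function map(doc) {
  if (doc.type === "point") {
    emit(doc.name, 1);
  }
}

// "Indexing": each document passes through map exactly once.
var docs = [
  { type: "point", name: "Alexanderplatz" },
  { type: "point", name: "Times Square" },
  { type: "note", text: "not a point" }
];
docs.forEach(map);
```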
If this is the case, then I'm certainly doing it wrong. Not really
sure how to cache these views with RelaxDB, but I'll figure out
something.
See above, wrong conclusion. Treating databases as tables is a
terrible idea. Sorry RelaxDB folks. See
http://upstream-berlin.com/2008/09/25/a-couchdb-primer-for-an-activerecord-mindset/
for a discussion on different Ruby libs.
I'm not sure that RelaxDB does this exactly, but... why is it a bad
idea? In my case, I suspect that moving "blobs" to attachments
will basically solve my problem, but it seems like you could use
different databases in the same way that you'd shard tables in a more
traditional setting.
What is the normal way of keeping the views in the database in sync
with the code? Is there some sort of "migration" concept that loads
the new views in, or is it more of a manual process?
Again, no big data-roundtrip problems. See above. With views, you
could do each dimension in a separate query and then intersect the two
results in your application. Or employ a GIS indexer / searcher over
the external indexing interface.
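The client-side intersection could be sketched like this, assuming each view query returns a list of document ids (the helper name is mine):

```javascript
// Intersect id lists from two separate range queries, e.g. one view
// keyed on latitude and one keyed on longitude.
function intersect(idsA, idsB) {
  var seen = {};
  for (var i = 0; i < idsA.length; i++) {
    seen[idsA[i]] = true;
  }
  return idsB.filter(function (id) {
    return seen[id] === true;
  });
}

// Hypothetical results of the two range queries:
var byLat = ["doc1", "doc2", "doc3"]; // matches the latitude range
var byLng = ["doc2", "doc3", "doc4"]; // matches the longitude range
var both = intersect(byLat, byLng);   // ["doc2", "doc3"]
```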
But that's not where I want to put my data-filtering logic! And in
this case, you'd get a lot of extraneous crap -- I don't need to get
points of interest from Rome if I'm looking for stuff in NYC (same
latitude, different world).
Where would I find information about the external indexing stuff?
Cheers
Jan
Thanks,
-w