On Wed, Jan 6, 2010 at 12:39 PM, Chris Anderson <[email protected]> wrote:
>> Any way to get an insight as to how big the index is? I can see how
>> big my database is (78M with ~11k docs) but I'd be curious to know how
>> big that view is stored in memory.
>
> The view is stored on disk. Look in the CouchDB data directory
> /usr/local/var/lib/couchdb for the view directory.
I only see the primary database file here, so I get a feeling for the
total size, but not what portion of that size is from the view. I
suppose I could delete the view, note the size, then rebuild and
compare the growth?

> Our reduce is not key-bounded, so [id array] would end up being the
> list of unique ids in the entire database for full-reduce.

Ok, that's kind of what I suspected. Are there any plans to offer
multiple levels of mapping? It seems like it would still fit the
pattern of individual updates and tree aggregation, and could allow
fast recreation of these kinds of indexes. Just a random question /
idea.

> The storage inefficiency you describe is likely what would force you
> from a pure Couch to a Lucene FTI solution first, as your data begins
> to scale.

Understood. I'll take another look at the Lucene integration. How many
people are using that?
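For what it's worth, here is a rough sketch of the size comparison I had in mind, done directly against the files on disk rather than by deleting and rebuilding. It assumes the default 0.x/1.x on-disk layout (data_dir/<db>.couch for the database and data_dir/.<db>_design/ for that database's view files); the "mydb" name and paths are placeholders, not anything from CouchDB itself.

```python
import os

def dir_size(path):
    """Sum the sizes of all files under a directory tree."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def couch_sizes(data_dir, db):
    """Return (database file bytes, view index bytes) for one database.

    Assumes the old default layout: <db>.couch next to a hidden
    .<db>_design/ directory holding the view index files.
    """
    db_file = os.path.join(data_dir, db + ".couch")
    view_dir = os.path.join(data_dir, "." + db + "_design")
    db_size = os.path.getsize(db_file) if os.path.exists(db_file) else 0
    view_size = dir_size(view_dir) if os.path.isdir(view_dir) else 0
    return db_size, view_size

# e.g. couch_sizes("/usr/local/var/lib/couchdb", "mydb")
```

That would at least separate "how big is the database" from "how big is the view index" without the delete-and-rebuild round trip.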
