On Mon, Jan 24, 2011 at 05:33, Paul Hirst <[email protected]> wrote:
> On Mon, 2011-01-24 at 11:26 +0000, Randall Leeds wrote:
>> On Mon, Jan 24, 2011 at 01:01, Paul Hirst <[email protected]> wrote:
>
> [snip]
>
>> I have two ideas if you need an alternative, but it depends on what
>> you're trying to avoid.
>> If you cannot deal with waiting for the new index to generate before
>> querying it, create the new views in a separate design doc. Query that
>> and wait for it to build. Once it has finished, rename the design
>> document (update the old one) and your views should be "pre-indexed".
>
> This is actually what I did on the backup server anyway because it's
> replicated to the live server.
>

Cool. It's a great trick.

> Which does bring me to another question. If you accidentally trigger an
> index rebuild, is there any way to stop it short of restarting couchdb?
>

None, as far as I'm aware. If it's not already in JIRA you could file a
ticket for an enhancement. Possibly updating the design document again,
to restore it to its old state, would stop the index, but I'm not sure.
If not, I would say that's a bug. Also, if you haven't run _view_cleanup
the old index would still be there.

>> If you cannot deal with the load generated by indexing itself, you
>> could create a remote query server. Be sure that the CouchDB user can
>> SSH without a password and add ssh to the beginning of your query
>> server command.
>
> [snip]
>
>> If all of this makes perfect sense, you can go ahead and give it a
>> shot. If it sounds terrifying, let's talk about it or catch me on IRC
>> (tilgovi). This is the first time I've recommended anything like this
>> be tried, so it probably deserves some close inspection before blindly
>> listening to a word :).
>
> This all makes sense but I'm worried it won't solve the problem. The CPU
> load from the couchjs process doesn't seem particularly significant in
> my case. When I have rebuilt indexes on the live server before, it seemed
> it was the disk IO which slowed everything down. My database currently
> stands at 22 million documents and 528G in size, and I guess that's a lot
> of disk seeks when reading the documents and writing out the new index
> file. So pushing the javascript execution over the network and onto
> another box presumably won't help with that. However, I am a bit of a
> newbie still, so if I've misunderstood I'd love to be put right.

You're right. My answer got long and it got late, but I meant to ask
what the bottleneck was.

> I think what I shall do in this case is fail over to my backup server,
> do a compact on what was the live server and then trigger an index
> build. Then I can fail back again. I already do this for compacting
> purposes and it seems I have a similar sort of problem here really.
>

This seems like your best option for now. I hope it works out for you!

Randall
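
P.S. In case it's useful to anyone reading the archives, here's roughly
what the design doc trick looks like over the HTTP API. This is only a
sketch using Python's requests library; the database URL, design doc
names, and view code are placeholders, and it assumes _design/main
already exists:

    import requests

    DB = "http://localhost:5984/mydb"  # placeholder database URL

    views = {"by_type": {"map": "function(doc) { emit(doc.type, null); }"}}

    # 1. Put the new views in a throwaway design doc and query it once;
    #    the request blocks until the index has been built.
    requests.put(DB + "/_design/staging",
                 json={"language": "javascript", "views": views})
    requests.get(DB + "/_design/staging/_view/by_type", params={"limit": 0})

    # 2. Copy the identical view definitions into the live design doc.
    #    The index file is keyed on the view source, not the doc name,
    #    so the already-built index gets picked up as-is.
    live = requests.get(DB + "/_design/main").json()
    live["views"] = views
    requests.put(DB + "/_design/main", json=live)

    # 3. Drop the staging doc and clean up any orphaned index files.
    staging = requests.get(DB + "/_design/staging").json()
    requests.delete(DB + "/_design/staging", params={"rev": staging["_rev"]})
    requests.post(DB + "/_view_cleanup",
                  headers={"Content-Type": "application/json"})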
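
The remote query server idea is just a one-line change to local.ini:
something like the following, assuming the couchdb user can ssh to the
worker box without a password and couchjs lives at the usual path there
(the hostname and paths are illustrative, so check your own install):

    ; local.ini on the CouchDB host
    [query_servers]
    javascript = ssh view-worker /usr/bin/couchjs /usr/share/couchdb/server/main.js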
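
And the failover pass you describe (compact, then rebuild, on whichever
node is out of rotation) could be scripted along these lines; again the
URL and view name are placeholders, and _compact assumes you're making
the request with admin credentials:

    import time
    import requests

    DB = "http://standby:5984/mydb"  # placeholder: the node out of rotation

    # kick off compaction, then poll the db info until it finishes
    requests.post(DB + "/_compact",
                  headers={"Content-Type": "application/json"})
    time.sleep(5)  # give compaction a moment to show up in the db info
    while requests.get(DB).json().get("compact_running"):
        time.sleep(30)

    # trigger the index build; the query returns once the view is current
    requests.get(DB + "/_design/main/_view/by_type", params={"limit": 0})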
