Bob, can you explain more? I should have said update_notification: I have an external query handler and an update_notification handler for updating the index. The problem is that the data gets into the database quickly, so when I perform an external query the update handler hasn't yet had time to do its work.
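(For reference, the two handlers are wired up in local.ini along these lines; the handler names and script paths below are just placeholders.)

    [update_notification]
    ; CouchDB spawns this command and writes one JSON line to its stdin
    ; for every database update; the path here is a placeholder
    index_updater = /usr/local/bin/clucene_indexer

    [external]
    ; external query handler, spoken to over stdin/stdout; path is a placeholder
    fti = /usr/local/bin/clucene_query

    [httpd_db_handlers]
    ; exposes the external handler at /<db>/_fti
    _fti = {couch_httpd_external, handle_external_req, <<"fti">>}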
I am using _changes in the update_notification listener to get the docs, so the problem is amplified: it gives me more docs than I was originally notified about (since the writes and the handlers are not in sync). If I am understanding you correctly, I could set up a process that listens to _changes; could I bring that process under the control of Erlang, as with an update_notification handler?

thanks,

Norman

On Tue, Jul 27, 2010 at 3:41 PM, Robert Newson <[email protected]> wrote:
> Reading _changes instead of using the (deprecated?) externals feature
> would avoid the problem?
>
> B.
>
> On Tue, Jul 27, 2010 at 10:37 PM, J Chris Anderson <[email protected]> wrote:
>>
>> On Jul 27, 2010, at 2:30 PM, Norman Barker wrote:
>>
>>> Hi,
>>>
>>> I have written couchdb-clucene (http://github.com/normanb/couchdb-clucene)
>>> and am doing a lot of testing with heavy datasets where I am sending a
>>> bulk doc request with 10 docs at a time, a couple of these every second
>>> for a couple of minutes.
>>>
>>> Very quickly CouchDB backs up and hogs the CPU, since the database
>>> commit and return don't wait for an external handler to do its job.
>>> The fire-and-forget model is fine and I like it, very similar to JMS,
>>> but since the external process is a singleton it has to be very quick
>>> to keep up with the load or the system slowly backs up.
>>>
>>> Is there a way to either define a pool of externals, or to change the
>>> default behaviour from fire and forget?
>>>
>>
>> Yes, but it involves feature development for CouchDB. Essentially we need
>> the externals protocol to be non-blocking. There is a thread on dev@ that
>> touches on this. I'm not sure who wants to own making the patch, but the
>> technical requirements are pretty well known.
>>
>> Thank you for working on something so awesome!
>>
>> Chris
>>
>>> thanks,
>>>
>>> Norman
>>
>>
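A minimal sketch of the kind of standalone _changes listener discussed above, assuming Python with the requests library (the host, database name, and indexing call are placeholders):

    # Sketch of a standalone _changes listener: follow the continuous feed
    # and hand each changed doc to the indexer as it arrives.
    import json
    import requests

    COUCH = "http://127.0.0.1:5984"   # assumed CouchDB address
    DB = "mydb"                       # placeholder database name

    def index_doc(doc):
        # placeholder for the real CLucene indexing call
        print("indexing", doc["_id"])

    def follow_changes(since=0):
        params = {"feed": "continuous", "include_docs": "true",
                  "heartbeat": 10000, "since": since}
        resp = requests.get("%s/%s/_changes" % (COUCH, DB),
                            params=params, stream=True)
        for line in resp.iter_lines():
            if not line:
                continue              # heartbeat newlines keep the socket alive
            change = json.loads(line)
            if "doc" in change:
                index_doc(change["doc"])
            since = change.get("seq", since)
        return since

    if __name__ == "__main__":
        follow_changes()

Because the listener walks the changes sequence itself, it sees every write exactly once no matter how far behind it falls, which sidesteps the race between the bulk writes and the update_notification handler; whether CouchDB's Erlang VM can supervise such a process the way it supervises update_notification handlers is the open question in the message above.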
