On Sat, Jul 4, 2009 at 2:35 AM, Adam Kocoloski <[email protected]> wrote:
> On Jul 3, 2009, at 6:37 PM, Chris Anderson wrote:
>
>>
>> I don't mean to be harsh but suggesting you have a performance problem
>> here is like me complaining that my Ferrari makes a bad boat.
>>
>> Cheers,
>> Chris
>
> Wow, that was unusually harsh coming from you, Chris. Taking a closer look
> at Göran's map and reduce functions I agree that they should be reworked to
> make use of group=true, but nevertheless I wonder if we do have something to
> work on here.

Yeah, I didn't mean to be upsetting. I do appreciate the performance
heads-up from Göran. More importantly, I think it's a documentation
heads-up. Maybe Futon could learn to recognize bad reduces and warn users
before they start them...

We go out of our way to say "don't do it like that", and we shouldn't
optimize for cases that aren't supported. I think a performance fix for
this experience would actually be a bad thing, as it would make the
symptoms of a bad reduce more subtle. If you write a reduce that doesn't
reduce, it should blow up in your face as soon as possible.

Reduces that build a map (of, say, unique words in a text) will grow very
fast. CouchDB is likely doing the right thing by flushing all the time
when hit with abusive reduces.

If we can find cases where supported uses are leading to avoidable
slowdowns, then I'm all for fixing them. I don't anticipate CouchDB
behaving badly on good reduces, although the only real answer here would
be benchmarking.

Göran, did you see the "reduce_overflow_error" when writing this view?
You'd have to be using trunk to see it, but if you are using trunk and
didn't get an explicit error, we've got work to do to make sure the error
appears.

Chris

> I think the fundamental question is why the flush operations were
> occurring so frequently the second time around. Is it because you were
> building up a largish hash for the reduce value? Probably. Nevertheless,
> I'd like to have a better handle on that.
>
> Adam
>
>>> The net effect is that the view update that took 1-2 seconds suddenly
>>> takes 400 seconds or goes to a total crawl and never seems to end.
>>>
>>> By looking at the log it obviously processes ONE doc at a time - giving
>>> us 2-5 emits typically - and then tries to reduce that all the way up
>>> to the root before processing the next doc. So the rereduces for the
>>> internal nodes will typically be run 1000x more often than needed in
>>> this case.
>>>
>>> Phew. :) Ok, so we are basically hosed with this behavior in this
>>> situation. I can only presume this has gone unnoticed because:
>>>
>>> a) The updates most of us do are small. But we dump thousands of new
>>> docs using bulk (a full new fiscal year of data for a given company),
>>> so we definitely notice it.
>>>
>>> b) Most reduce/rereduce functions are very, very fast, so it goes
>>> unnoticed. Our functions are NOT that fast - but if they were only run
>>> as they should be (well, presuming they *should* only be run after all
>>> the emits for all doc changes in a given view update), it would indeed
>>> be fast anyway. We can see that, since the first 1000 docs work fine.
>>>
>>> ...and thanks to the people on #couchdb for discussing this with me
>>> earlier today and looking at the Erlang code to try to figure it out. I
>>> think Adam Kocoloski and Robert Newson had some ideas about it.
>>>
>>> regards, Göran
>>>
>>> PS. I am on vacation now for 4 weeks, so I will not be answering much
>>> email. I wanted to get this posted though, since it is in some sense a
>>> rather ... serious performance bottleneck.
>>
>> --
>> Chris Anderson
>> http://jchrisa.net
>> http://couch.io
>

--
Chris Anderson
http://jchrisa.net
http://couch.io
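
Göran's actual map and reduce functions are not shown in this thread, but a
minimal sketch of the two patterns under discussion might look like the
following (the field name "text" and the view/design-doc names are
hypothetical). The first reduce accumulates an ever-growing map of unique
word counts, the shape of output that trunk's reduce_overflow_error is meant
to flag; the second is the kind of group=true rework Adam suggests, where
the per-word fan-out happens in the map and the reduce stays a small scalar.

// Design doc A - anti-pattern: the reduce value grows with the input
// instead of shrinking.
function map(doc) {
  if (doc.text) emit(doc._id, doc.text);
}
function reduce(keys, values, rereduce) {
  var counts = {};
  if (rereduce) {
    // merge the word-count maps from lower levels -- the result keeps growing
    values.forEach(function(v) {
      for (var w in v) counts[w] = (counts[w] || 0) + v[w];
    });
  } else {
    values.forEach(function(text) {
      text.split(/\s+/).forEach(function(w) {
        counts[w] = (counts[w] || 0) + 1;
      });
    });
  }
  return counts;   // unbounded: one entry per unique word seen so far
}

// Design doc B - rework: emit one row per word and reduce to a number,
// then query with group=true.
function map(doc) {
  if (!doc.text) return;
  doc.text.split(/\s+/).forEach(function(word) {
    emit(word, 1);
  });
}
function reduce(keys, values, rereduce) {
  return sum(values);   // sum() is provided by the JavaScript view server
}

// GET /db/_design/words/_view/word_counts?group=true
// -> one row per word; every rereduce combines only small integers

With the second form, the per-document rereduce passes Göran describes only
ever combine integers, so even a large bulk update stays cheap no matter how
often the view engine flushes.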
