You can turn it off (caveat emptor):

    ; Changing reduce_limit to false will disable reduce_limit.
    ; If you think you're hitting reduce_limit with a "good" reduce function,
    ; please let us know on the mailing list so we can fine tune the heuristic.
    [query_server_config]
    reduce_limit = true
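Turning it off amounts to overriding that setting, for example in local.ini (which takes precedence over default.ini):

```
[query_server_config]
reduce_limit = false
```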
B.

On Sun, Aug 16, 2009 at 4:49 PM, Paul Carey <[email protected]> wrote:
> On Tue, May 5, 2009 at 8:50 PM, Brian Candler <[email protected]> wrote:
>> On Mon, May 04, 2009 at 03:08:38PM -0700, Chris Anderson wrote:
>>> I'm checking in a patch that should cut down on the number of mailing
>>> list questions asking why a particular reduce function is hella slow.
>>> Essentially the patch throws an error if the reduce function return
>>> value is more than half the size of the values array that was
>>> passed in. (The check is skipped if the size is below a fixed amount,
>>> 200 bytes for now.)
>>
>> I think that 200 byte limit is too low, as I have now had to turn off the
>> reduce_limit on my server for this:
>
> I'm using a reduce function to sort data so that clients can query
> for the most recent piece of data. For example:
>
> function most_recent_reading_map(doc) {
>   if (doc.type === "TemperatureReading") {
>     emit(doc.station_id, doc);
>   }
> }
>
> function most_recent_reading_reduce(keys, values) {
>   var sorted = values.sort(function (a, b) {
>     return b.created_at.localeCompare(a.created_at);
>   });
>   return sorted[0];
> }
>
> The main reason I might want to do this is to simplify client logic, but
> another valid reason is to avoid sending and processing
> unnecessarily large chunks of JSON.
>
> This kind of reduce function may fall foul of the
> reduce_overflow_error, but only if the document is greater than 200
> bytes. So, I'm echoing the opinion that 200 bytes is too low. I also
> believe that throwing an exception is a bit draconian as it could
> result in an unjustified failure in production. I think a warning
> would be more appropriate.
>
> Paul
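The heuristic described in the thread can be sketched as follows. This is an illustration only, not CouchDB's actual implementation: the function name `checkReduceOverflow`, the use of JSON string length as the size measure, and the error message are all assumptions made for the sketch.

```javascript
// Sketch of the reduce_limit heuristic: a reduce output larger than 200
// bytes must have shrunk to at most half the size of its input, otherwise
// the query server raises a reduce_overflow_error.
function checkReduceOverflow(values, reduction) {
  // Sizes measured as JSON-serialized lengths (an assumption of this sketch).
  var inputSize = JSON.stringify(values).length;
  var outputSize = JSON.stringify(reduction).length;

  // The check is skipped entirely for small outputs (200 bytes or less).
  if (outputSize <= 200) {
    return;
  }
  // Otherwise the output must be at most half the size of the input.
  if (outputSize * 2 > inputSize) {
    throw new Error(
      "reduce_overflow_error: output of " + outputSize +
      " bytes did not sufficiently reduce input of " + inputSize + " bytes"
    );
  }
}
```

Note how this explains Paul's complaint: a reduce that returns one whole document (like `sorted[0]` above) passes while documents stay under 200 bytes, but trips the check as soon as a single document grows past it, since one document can never be half the size of a values array containing little more than itself.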
