Hi Robert,

Thanks very much for the reply. That makes sense.
I gather this means that if I'm running a single server, at least with
today's code, commutativity isn't required? If so, is that something I
can count on? For example, if I know my application is quite small and
will never be sharded, is it safe for me to use a non-commutative
reduce? To check my understanding of the sharded evaluation you
describe, I've put a small sketch after the quoted thread below.

Thanks,
Oliver

On Tue, Dec 3, 2013 at 9:57 AM, Robert <[email protected]> wrote:
> Because the order in which we pass keys and values to the reduce
> function is not defined. In sharded situations (like BigCouch, which
> is being merged), an intermediate reduce value on an effectively
> random subset of keys/values is generated at each node, and a final
> rereduce is done on all the intermediates. The constraints on reduce
> functions exist in anticipation of clustering.
>
> B.
>
>
> On 1 December 2013 21:45, Oliver Dain <[email protected]> wrote:
> > Hey CouchDB users,
> >
> > I've just started messing around with CouchDB, and I understand why
> > CouchDB reduce functions need to be associative, but I don't
> > understand why they also have to be commutative. I posted a much
> > more detailed version of this question to StackOverflow yesterday,
> > but haven't gotten an answer yet (my SO experience says that means
> > I probably won't ever get one). Figured it might be smart to
> > explicitly loop in the CouchDB community.
> >
> > The original StackOverflow question is here:
> >
> > http://stackoverflow.com/questions/20303355/why-do-couchdb-reduce-functions-have-to-be-commutative
> >
> > Any thoughts would be appreciated!
> >
> > Thanks,
> > Oliver
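Here's the sketch mentioned above: a minimal TypeScript simulation of
the evaluation Robert describes (TypeScript rather than the JavaScript
the view server actually runs, and none of it is CouchDB code; the
shard splits and helper names are made up for illustration).
Concatenation is associative but not commutative, so the single-server
answer is stable while the sharded answer depends on which shard
happens to hold which rows.

// Sketch only: simulates the described evaluation strategy, not
// CouchDB's implementation. A reduce function receives
// (keys, values, rereduce), mirroring CouchDB's reduce signature.
type Reduce<T> =
  (keys: unknown[] | null, values: T[], rereduce: boolean) => T;

// Associative but NOT commutative: the order of the values matters.
const concat: Reduce<string> = (_keys, values, _rereduce) =>
  values.reduce((acc, v) => acc + v);

// Single server: one reduce pass over all values, in key order.
function singleNode<T>(reduce: Reduce<T>, values: T[]): T {
  return reduce(null, values, false);
}

// Cluster: each shard reduces its own (effectively random) subset,
// then a final rereduce runs over the intermediate values.
function sharded<T>(reduce: Reduce<T>, shards: T[][]): T {
  const intermediates = shards.map((vs) => reduce(null, vs, false));
  return reduce(null, intermediates, true);
}

const values = ["a", "b", "c", "d"];

console.log(singleNode(concat, values));                // "abcd"
console.log(sharded(concat, [["a", "b"], ["c", "d"]])); // "abcd"
console.log(sharded(concat, [["c", "d"], ["a", "b"]])); // "cdab"

On one node the reduce happens to see the values in a deterministic
order, which is presumably why a non-commutative function appears to
work today; under sharding the result depends on row placement, which
is exactly what requiring commutativity guards against.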
