I constantly see "one DB per user" being proposed as a solution, but I'm not convinced it will truly work for a large-scale setup.
The reason people choose CouchDB is high-scale use, where one could easily end up with a million users. What good, then, is a database that relies solely on the underlying filesystem to do the job of index keeping? With a million "*.couch" files under /var/lib/couchdb/, I'd expect performance to be poor and unpredictable, since it now depends on the underlying filesystem's own logic.

How can this be partitioned? What is the "right" way to handle a million users, each needing isolated documents in their own DB? And how will replication solutions cope with distributing these million databases? Two million replicating connections between two servers doesn't sound right.

Regards,
-Suraj

--
An Onion is the Onion skin and the Onion under the skin until the Onion Skin without any Onion underneath.
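To make the fan-out concrete, here is a rough sketch (Python, no network calls) of what continuous two-way replication per user implies between just two servers. The document fields `source`, `target`, and `continuous` follow CouchDB's replication document shape; the hostnames and helper function names are purely illustrative assumptions.

```python
def userdb_name(user_id: str) -> str:
    """One database per user, i.e. one 'userdb-<id>.couch' file on disk."""
    return f"userdb-{user_id}"

def replication_docs(user_ids, host_a, host_b):
    """Build the replication documents you'd need for two-way continuous
    replication: one per direction per user, hence 2 jobs per user."""
    docs = []
    for uid in user_ids:
        db = userdb_name(uid)
        for src, tgt in ((host_a, host_b), (host_b, host_a)):
            docs.append({
                "source": f"{src}/{db}",
                "target": f"{tgt}/{db}",
                "continuous": True,
            })
    return docs

# With 1,000 users and two servers, that's already 2,000 replication jobs;
# scale linearly and a million users means two million.
docs = replication_docs([f"u{i:06d}" for i in range(1000)],
                        "http://a.example:5984", "http://b.example:5984")
print(len(docs))  # 2000
```

Each of those documents would have to be managed (created, monitored, restarted) somewhere, which is exactly the overhead being questioned above.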
