wohali commented on a change in pull request #268: Rewrite sharding 

 File path: src/cluster/sharding.rst
 @@ -18,43 +18,170 @@ Sharding
 .. _cluster/sharding/scaling-out:
-Scaling out
-Normally you start small and grow over time. In the beginning you might do just
-fine with one node, but as your data and number of clients grows, you need to
-scale out.
+A `shard <https://en.wikipedia.org/wiki/Shard_(database_architecture)>`__
+is a horizontal partition of data in a database. Partitioning data into
+shards and distributing copies of each shard (called "replicas") to
+different nodes in a cluster gives the data greater durability against
+node loss. CouchDB clusters automatically shard and distribute data
+among nodes, but modifying cluster membership and customizing shard
+behavior must be done manually.
-For simplicity we will start fresh and small.
+Shards and Replicas
-Start ``node1`` and add a database to it. To keep it simple we will have 2
-shards and no replicas.
+How many shards and replicas each database has can be set at the global
+level, or on a per-database basis. The relevant parameters are *q* and
+*n*.
-.. code-block:: bash
+*q* is the number of database shards to maintain. *n* is the number of
+copies of each document to distribute. With q=8, the database is split
+into 8 shards. With n=3, the cluster distributes three replicas of each
+shard. Altogether, that's 24 shards for a single database. In a default
+3-node cluster, each node would receive 8 shards. In a 4-node cluster,
+each node would receive 6 shards.
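The shard arithmetic above can be checked with a quick shell sketch (nothing CouchDB-specific is assumed here, only the numbers from the paragraph):

```shell
# Sketch: total shard files for a database = q * n,
# divided evenly among the nodes in the cluster.
q=8; n=3
total=$((q * n))
echo "total shards: $total"                 # 24
echo "per node, 3 nodes: $((total / 3))"    # 8
echo "per node, 4 nodes: $((total / 4))"    # 6
```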
-    curl -X PUT "http://xxx.xxx.xxx.xxx:5984/small?n=1&q=2"; --user daboss
+CouchDB nodes have an ``etc/default.ini`` file with a section named
+``[cluster]`` which looks like this::
-If you look in the directory ``data/shards`` you will find the 2 shards.
-.. code-block:: text
+    [cluster]
+    q=8
+    n=3
-    data/
-    +-- shards/
-    |   +-- 00000000-7fffffff/
-    |   |    -- small.1425202577.couch
-    |   +-- 80000000-ffffffff/
-    |        -- small.1425202577.couch
+These settings can be modified to set sharding defaults for all
+databases, or they can be set on a per-database basis by specifying the
+``q`` and ``n`` query parameters when the database is created. For
+example:
-Now, check the node-local ``_dbs_`` database. Here, the metadata for each
-database is stored. As the database is called ``small``, there is a document
-called ``small`` there. Let us look in it. Yes, you can get it with curl too:
+.. code:: bash
-.. code-block:: javascript
+    $ curl -X PUT "$COUCH_URL/database-name?q=4&n=2"
-    curl -X GET "http://xxx.xxx.xxx.xxx:5986/_dbs/small";
+That creates a database split into 4 shards, with 2 replicas of each
+shard, yielding 8 shards distributed throughout the cluster.
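For illustration, the hexadecimal range names CouchDB uses for shard directories (as seen under ``data/shards/``) split the 32-bit hash space into *q* equal ranges; a sketch of that computation for q=4 (the directory-naming scheme is the only assumption):

```shell
# Sketch: shard directory names divide the 32-bit hash space
# into q equal, contiguous ranges.
q=4
step=$(( 0x100000000 / q ))   # size of each range
for (( i = 0; i < q; i++ )); do
  printf '%08x-%08x\n' $(( i * step )) $(( (i + 1) * step - 1 ))
done
# → 00000000-3fffffff, 40000000-7fffffff,
#   80000000-bfffffff, c0000000-ffffffff
```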
+When a CouchDB cluster serves reads and writes, it proxies the request
+to nodes with relevant shards and responds once enough nodes have
+responded to establish
+`quorum <https://en.wikipedia.org/wiki/Quorum_(distributed_computing)>`__.
+The size of the required quorum can be configured at request time by
+setting the ``r`` parameter for document and view reads, and the ``w``
+parameter for document writes. For example, here is a request that
+specifies that at least two nodes must respond in order to establish a
+quorum:
+.. code:: bash
+    $ curl "$COUCH_URL:5984/{db}/{docId}?r=2"
+Here is a similar example for writing a document:
+.. code:: bash
+    $ curl -X PUT "$COUCH_URL:5984/{db}/{docId}?w=2" -d '{}'
+Setting ``r`` or ``w`` to be equal to ``n`` (the number of replicas)
+means you will only receive a response once all nodes with relevant
+shards have responded; however, even this does not guarantee `ACIDic
+consistency <https://en.wikipedia.org/wiki/ACID#Consistency>`__. Setting
+``r`` or ``w`` to 1 means you will receive a response after only one
+relevant node has responded.
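When ``r`` or ``w`` is not specified, CouchDB defaults to a simple majority of ``n`` (assumption here: the majority formula ``n/2 + 1``, which gives the usual quorum of 2 for the default n=3):

```shell
# Sketch: majority quorum for a few values of n
# (assumed formula: n/2 + 1, integer division).
for n in 1 2 3; do
  echo "n=$n -> default quorum $(( n / 2 + 1 ))"
done
# → n=1 -> default quorum 1
#   n=2 -> default quorum 2
#   n=3 -> default quorum 2
```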
+Adding a node
+To add a node to a cluster, first you must have the additional node
+running somewhere. Make note of the address it binds to, like
+````, then ``PUT`` an empty document to the ``/_node``
 Review comment:
   I'd use a domain name in an actual cluster; running multiple nodes on the 
same host kind-of defeats the purpose of a cluster and is only really done in 
dev/demo environments.

With regards,
Apache Git Services
