Github user kxepal commented on a diff in the pull request:

    https://github.com/apache/couchdb-documentation/pull/19#discussion_r28236027
  
    --- Diff: src/cluster/sharding.rst ---
    @@ -0,0 +1,289 @@
    +.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
    +.. use this file except in compliance with the License. You may obtain a copy of
    +.. the License at
    +..
    +..   http://www.apache.org/licenses/LICENSE-2.0
    +..
    +.. Unless required by applicable law or agreed to in writing, software
    +.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    +.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    +.. License for the specific language governing permissions and limitations under
    +.. the License.
    +
    +.. _cluster/sharding:
    +
    +========
    +Sharding
    +========
    +
    +.. _cluster/sharding/scaling-out:
    +
    +Scaling out
    +===========
    +
    +Normally you start small and grow over time. In the beginning you might do just
    +fine with one node, but as your data and your number of clients grow, you need
    +to scale out.
    +
    +For simplicity we will start fresh and small.
    +
    +Start node1 and add a database to it. To keep it simple, we will have 2 shards
    +and no replicas.
    +
    +.. code-block:: bash
    +
    +    curl -X PUT "http://xxx.xxx.xxx.xxx:5984/small?n=1&q=2" --user daboss
    +
    +If you look in the directory ``data/shards`` you will find the 2 shards.
    +
    +.. code-block:: text
    +
    +    data
    +        shards
    +            00000000-7fffffff
    +                small.1425202577.couch
    +            80000000-ffffffff
    +                small.1425202577.couch
    +
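    +The range boundaries can be reproduced with a little shell arithmetic. This is
    +only a sketch of the math (CouchDB computes the ranges internally): the 32-bit
    +hash space is split into ``q`` equal ranges.
    +
    +.. code-block:: bash
    +
    +    q=2
    +    step=$(( 0x100000000 / q ))
    +    for i in $(seq 0 $((q - 1))); do
    +        printf '%08x-%08x\n' $(( i * step )) $(( (i + 1) * step - 1 ))
    +    done
    +
    +For ``q=2`` this prints the two ranges seen in the directory listing above.
    +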
    +Now, go to the admin panel
    +
    +.. code-block:: text
    +
    +    http://xxx.xxx.xxx.xxx:5986/_utils
    +
    +and look in the database ``_dbs``; it is here that the metadata for each
    +database is stored. As the database is called ``small``, there is a document
    +called ``small`` there. Let us look at it. Yes, you can get it with curl too:
    +
    +.. code-block:: javascript
    +
    +    curl -X GET "http://xxx.xxx.xxx.xxx:5986/_dbs/small"
    +
    +    {
    +        "_id": "small",
    +        "_rev": "1-5e2d10c29c70d3869fb7a1fd3a827a64",
    +        "shard_suffix": [
    +            46,
    +            49,
    +            52,
    +            50,
    +            53,
    +            50,
    +            48,
    +            50,
    +            53,
    +            55,
    +            55
    +            ],
    +        "changelog": [
    +        [
    +            "add",
    +            "00000000-7fffffff",
    +            "[email protected]"
    +        ],
    +        [
    +            "add",
    +            "80000000-ffffffff",
    +            "[email protected]"
    +        ]
    +        ],
    +        "by_node": {
    +            "[email protected]": [
    +                "00000000-7fffffff",
    +                "80000000-ffffffff"
    +            ]
    +        },
    +        "by_range": {
    +            "00000000-7fffffff": [
    +                "[email protected]"
    +            ],
    +            "80000000-ffffffff": [
    +                "[email protected]"
    +            ]
    +        }
    +    }
    +
    +* ``_id`` The name of the database.
    +* ``_rev`` The current revision of the metadata.
    +* ``shard_suffix`` The characters after ``small`` and before ``.couch``: the
    +  number of seconds after the UNIX epoch at which the database was created,
    +  stored as ASCII character codes.
    +* ``changelog`` Self-explanatory. Only for admins to read.
    +* ``by_node`` Which shards each node has.
    +* ``by_range`` On which nodes each shard is.
    +
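    +As an illustration, the suffix can be decoded from those ASCII codes with a
    +bit of shell (just a sketch; the codes are the ones shown in the document
    +above):
    +
    +.. code-block:: bash
    +
    +    for c in 46 49 52 50 53 50 48 50 53 55 55; do
    +        printf "\\$(printf '%03o' "$c")"
    +    done; echo
    +
    +This prints ``.1425202577``, the suffix seen in the shard file names.
    +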
    +Nothing here, nothing there, a shard in my sleeve
    +-------------------------------------------------
    +
    +Start node2 and add it to the cluster. Check in ``/_membership`` that the
    +nodes are talking to each other.
    +
    +If you look in the directory ``data`` on node2, you will see that there is no
    +directory called ``shards``.
    +
    +Go to Fauxton and edit the metadata for ``small`` so that it looks like this:
    +
    +.. code-block:: javascript
    +
    +    {
    +        "_id": "small",
    +        "_rev": "1-5e2d10c29c70d3869fb7a1fd3a827a64",
    +        "shard_suffix": [
    +            46,
    +            49,
    +            52,
    +            50,
    +            53,
    +            50,
    +            48,
    +            50,
    +            53,
    +            55,
    +            55
    +        ],
    +        "changelog": [
    +        [
    +            "add",
    +            "00000000-7fffffff",
    +            "[email protected]"
    +        ],
    +        [
    +            "add",
    +            "80000000-ffffffff",
    +            "[email protected]"
    +        ],
    +        [
    +            "add",
    +            "00000000-7fffffff",
    +            "[email protected]"
    +        ],
    +        [
    +            "add",
    +            "80000000-ffffffff",
    +            "[email protected]"
    +        ]
    +        ],
    +        "by_node": {
    +            "[email protected]": [
    +                "00000000-7fffffff",
    +                "80000000-ffffffff"
    +            ],
    +            "[email protected]": [
    +                "00000000-7fffffff",
    +                "80000000-ffffffff"
    +            ]
    +        },
    +        "by_range": {
    +            "00000000-7fffffff": [
    +                "[email protected]",
    +                "[email protected]"
    +            ],
    +            "80000000-ffffffff": [
    +                "[email protected]",
    +                "[email protected]"
    +            ]
    +        }
    +    }
    +
    +Then press Save and marvel at the magic. The shards are now on node2 too! We
    +now have ``n=2``!
    +
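    +If you prefer curl to Fauxton, the same edit can be made against the
    +node-local port. This is only a sketch (host and credentials are the
    +placeholders used above, and ``small.json`` is a hypothetical file holding
    +the edited metadata):
    +
    +.. code-block:: bash
    +
    +    # fetch the current metadata document
    +    curl -X GET "http://xxx.xxx.xxx.xxx:5986/_dbs/small" --user daboss > small.json
    +    # edit small.json as shown above, then write it back
    +    curl -X PUT "http://xxx.xxx.xxx.xxx:5986/_dbs/small" --user daboss -d @small.json
    +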
    +If the shards are large, you can copy them over manually and have CouchDB
    +sync only the changes from the last few minutes instead.
    +
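    +A manual copy might be sketched like this (the user, host and data directory
    +are placeholders for your own setup):
    +
    +.. code-block:: bash
    +
    +    # copy the shard files to node2, then let internal replication
    +    # sync whatever changed during the copy
    +    rsync -av data/shards/ [email protected]:data/shards/
    +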
    +.. _cluster/sharding/move:
    +
    +Moving Shards
    +=============
    +
    +When you get to ``n=3`` you should start moving the shards instead of adding
    +more replicas.
    +
    +.. warning::
    +    If you have ``n<2``, you have to stop all writes to the database or you
    +    will *LOSE DATA*!
    --- End diff --
    
    Really loose? 

