[couchdb-documentation] branch final-wiki-conversion updated (4e3f335 -> 2abc317)

2018-12-20 Thread wohali
This is an automated email from the ASF dual-hosted git repository.

wohali pushed a change to branch final-wiki-conversion
in repository https://gitbox.apache.org/repos/asf/couchdb-documentation.git.


 discard 4e3f335  Last wiki migrations; rework of TOC & headers
 add 2abc317  Last wiki migrations; rework of TOC & headers

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (4e3f335)
            \
             N -- N -- N   refs/heads/final-wiki-conversion (2abc317)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.
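The "O revisions become unreachable" situation described above is just a reachability question over the commit graph. The following illustrative Python sketch (commit names B, O1..O3, N1..N3 are hypothetical, mirroring the diagram) computes which revisions a `--force` push leaves behind:

```python
# Illustrative sketch: which commits does a force push leave behind?
# Commit names (B, O1..O3, N1..N3) are hypothetical and mirror the
# O/N diagram in the notification template above.

def reachable(tip, parents):
    """Walk parent links from a tip and collect every reachable commit."""
    seen, stack = set(), [tip]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(parents.get(c, []))
    return seen

# Before the push the branch points at O3 (B -- O1 -- O2 -- O3);
# after the force push it points at N3 (B -- N1 -- N2 -- N3).
parents = {"O1": ["B"], "O2": ["O1"], "O3": ["O2"],
           "N1": ["B"], "N2": ["N1"], "N3": ["N2"]}

old = reachable("O3", parents)
new = reachable("N3", parents)

# Revisions reachable only from the old tip are "discard"ed,
# unless some other ref still points at them ("omit").
discarded = sorted(old - new)
print(discarded)
```

Under these assumptions the O commits are exactly the old-only set, while the common base B survives on both sides.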

No new revisions were added by this update.

Summary of changes:
 src/index.rst | 4 ++++
 1 file changed, 4 insertions(+)



[couchdb-documentation] branch final-wiki-conversion updated (c9b8833 -> 4e3f335)

2018-12-20 Thread wohali
This is an automated email from the ASF dual-hosted git repository.

wohali pushed a change to branch final-wiki-conversion
in repository https://gitbox.apache.org/repos/asf/couchdb-documentation.git.


 discard c9b8833  Last wiki migrations; rework of TOC & headers
 add 4e3f335  Last wiki migrations; rework of TOC & headers

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (c9b8833)
            \
             N -- N -- N   refs/heads/final-wiki-conversion (4e3f335)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 8 ++++++++
 1 file changed, 8 insertions(+)



[couchdb-documentation] branch final-wiki-conversion updated (b26f4a6 -> c9b8833)

2018-12-20 Thread wohali
This is an automated email from the ASF dual-hosted git repository.

wohali pushed a change to branch final-wiki-conversion
in repository https://gitbox.apache.org/repos/asf/couchdb-documentation.git.


 discard b26f4a6  Last wiki migrations; rework of TOC & headers
 add c9b8833  Last wiki migrations; rework of TOC & headers

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (b26f4a6)
            \
             N -- N -- N   refs/heads/final-wiki-conversion (c9b8833)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 src/index.rst | 1 +
 1 file changed, 1 insertion(+)



[couchdb-documentation] branch final-wiki-conversion updated (3947c08 -> b26f4a6)

2018-12-20 Thread wohali
This is an automated email from the ASF dual-hosted git repository.

wohali pushed a change to branch final-wiki-conversion
in repository https://gitbox.apache.org/repos/asf/couchdb-documentation.git.


 discard 3947c08  Last wiki migrations; rework of TOC & headers
 add b26f4a6  Last wiki migrations; rework of TOC & headers

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (3947c08)
            \
             N -- N -- N   refs/heads/final-wiki-conversion (b26f4a6)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 src/index.rst | 3 +++
 1 file changed, 3 insertions(+)



[couchdb-documentation] 01/01: Last wiki migrations; rework of TOC & headers

2018-12-20 Thread wohali
This is an automated email from the ASF dual-hosted git repository.

wohali pushed a commit to branch final-wiki-conversion
in repository https://gitbox.apache.org/repos/asf/couchdb-documentation.git

commit 3947c08ac5aa803f22f3cb862687925365004742
Author: Joan Touzet 
AuthorDate: Thu Dec 20 18:36:32 2018 -0500

Last wiki migrations; rework of TOC & headers
---
 ext/configdomain.py |  4 ++--
 ext/httpdomain.py   |  2 +-
 src/api/local.rst   |  5 ++---
 src/cluster/index.rst   |  6 +++---
 src/conf.py |  6 --
 src/config/index.rst|  6 +++---
 src/cve/index.rst   |  6 +++---
 src/ddocs/views/intro.rst   | 14 ++
 src/ddocs/views/joins.rst   | 29 +++-
 src/index.rst   | 42 +
 src/install/troubleshooting.rst | 18 ++
 src/intro/api.rst   |  3 ++-
 src/intro/overview.rst  | 14 +++---
 src/maintenance/index.rst   |  6 +++---
 src/replication/conflicts.rst   | 42 +
 src/replication/index.rst   |  2 +-
 src/setup/cluster.rst   |  8 
 src/whatsnew/index.rst  |  7 ---
 templates/layout.html   |  6 +++---
 templates/pages/index.html  |  4 ++--
 20 files changed, 152 insertions(+), 78 deletions(-)

diff --git a/ext/configdomain.py b/ext/configdomain.py
index fae52a9..66ed532 100644
--- a/ext/configdomain.py
+++ b/ext/configdomain.py
@@ -57,8 +57,8 @@ class ConfigObject(ObjectDescription):
 class ConfigIndex(Index):
 
 name = "ref"
-localname = "Configuration Reference"
-shortname = "Config Reference"
+localname = "Configuration Quick Reference"
+shortname = "Config Quick Reference"
 
 def generate(self, docnames=None):
 content = dict(
diff --git a/ext/httpdomain.py b/ext/httpdomain.py
index 6354b24..5e8803d 100644
--- a/ext/httpdomain.py
+++ b/ext/httpdomain.py
@@ -495,7 +495,7 @@ class HTTPXRefRole(XRefRole):
 class HTTPIndex(Index):
 
 name = "api"
-localname = "HTTP API Reference"
+localname = "API Quick Reference"
 shortname = "API Reference"
 
 def generate(self, docnames=None):
diff --git a/src/api/local.rst b/src/api/local.rst
index c261721..0698ae2 100644
--- a/src/api/local.rst
+++ b/src/api/local.rst
@@ -54,9 +54,8 @@ A list of the available methods and URL paths are provided below:
 
 .. _api/local/doc:
 
-
 ``/db/_local_docs``
--------------------
+===================
 
 .. http:get:: /{db}/_local_docs
 :synopsis: Returns a built-in view of all local (non-replicating) documents
@@ -227,7 +226,7 @@ A list of the available methods and URL paths are provided below:
 }
 
 ``/db/_local/id``
------------------
+=================
 
 .. http:get:: /{db}/_local/{docid}
 :synopsis: Returns the latest revision of the local document
diff --git a/src/cluster/index.rst b/src/cluster/index.rst
index 93973d0..46e569a 100644
--- a/src/cluster/index.rst
+++ b/src/cluster/index.rst
@@ -12,9 +12,9 @@
 
 .. _cluster:
 
-=================
-Cluster Reference
-=================
+==================
+Cluster Management
+==================
 
 As of CouchDB 2.0.0, CouchDB can be run in two different modes of operation:
 * Standalone
diff --git a/src/conf.py b/src/conf.py
index 8ada770..a0cb15e 100644
--- a/src/conf.py
+++ b/src/conf.py
@@ -36,9 +36,11 @@ nitpicky = True
 version = "2.3"
 release = "2.3.0"
 
-project = "Apache CouchDB"
+project = u"Apache CouchDB\u00ae"
 
-copyright = "%d, %s" % (datetime.datetime.now().year, "Apache Software Foundation")
+copyright = u"%d, %s" % (datetime.datetime.now().year, \
+    u"Apache Software Foundation. CouchDB\u00ae is a registered trademark of the " + \
+    u"Apache Software Foundation")
 
 primary_domain = "http"
 
diff --git a/src/config/index.rst b/src/config/index.rst
index 64b03bf..492f23d 100644
--- a/src/config/index.rst
+++ b/src/config/index.rst
@@ -12,9 +12,9 @@
 
 .. _config:
 
-===================
-Configuring CouchDB
-===================
+=============
+Configuration
+=============
 
 .. toctree::
 :maxdepth: 2
diff --git a/src/cve/index.rst b/src/cve/index.rst
index d55fec8..8807d04 100644
--- a/src/cve/index.rst
+++ b/src/cve/index.rst
@@ -12,9 +12,9 @@
 
 .. _cve:
 
-===========================
-Security Issues Information
-===========================
+======================
+Security Issues / CVEs
+======================
 
 .. toctree::
 :maxdepth: 1
diff --git a/src/ddocs/views/intro.rst b/src/ddocs/views/intro.rst
index 1596145..8ac6c82 100644
--- a/src/ddocs/views/intro.rst
+++ b/src/ddocs/views/intro.rst
@@ -178,6 +178,20 @@ confusion. CouchDB automatically includes the document ID of the document that
 created the entry in the view result. We’ll use this as well when constructing
 links to the blog post pages.
 
+.. 

[couchdb-documentation] branch final-wiki-conversion created (now 3947c08)

2018-12-20 Thread wohali
This is an automated email from the ASF dual-hosted git repository.

wohali pushed a change to branch final-wiki-conversion
in repository https://gitbox.apache.org/repos/asf/couchdb-documentation.git.


  at 3947c08  Last wiki migrations; rework of TOC & headers

This branch includes the following new commits:

 new 3947c08  Last wiki migrations; rework of TOC & headers

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[couchdb] branch master updated: Remove shim couch_replicator_manager module

2018-12-20 Thread vatamane
This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/couchdb.git


The following commit(s) were added to refs/heads/master by this push:
 new e97f029  Remove shim couch_replicator_manager module
e97f029 is described below

commit e97f0297234b03b9b4a9904c7403b703a2a9a735
Author: Kyle Snavely 
AuthorDate: Thu Dec 20 16:37:13 2018 -0500

Remove shim couch_replicator_manager module
---
 src/couch/src/couch_server.erl | 28 +++---
 .../src/couch_replicator_manager.erl   | 29 --
 .../test/couch_replicator_modules_load_tests.erl   | 45 --
 src/fabric/src/fabric_doc_update.erl   |  2 +-
 4 files changed, 15 insertions(+), 89 deletions(-)

diff --git a/src/couch/src/couch_server.erl b/src/couch/src/couch_server.erl
index c4b7bf1..619ef08 100644
--- a/src/couch/src/couch_server.erl
+++ b/src/couch/src/couch_server.erl
@@ -160,20 +160,20 @@ maybe_add_sys_db_callbacks(DbName, Options) ->
 IsUsersDb = path_ends_with(DbName, "_users")
 orelse path_ends_with(DbName, UsersDbSuffix),
 if
-   DbName == DbsDbName ->
-   [sys_db | Options];
-   DbName == NodesDbName ->
-   [sys_db | Options];
-   IsReplicatorDb ->
-   [{before_doc_update, fun couch_replicator_manager:before_doc_update/2},
-{after_doc_read, fun couch_replicator_manager:after_doc_read/2},
-sys_db | Options];
-   IsUsersDb ->
-   [{before_doc_update, fun couch_users_db:before_doc_update/2},
-{after_doc_read, fun couch_users_db:after_doc_read/2},
-sys_db | Options];
-   true ->
-   Options
+DbName == DbsDbName ->
+[sys_db | Options];
+DbName == NodesDbName ->
+[sys_db | Options];
+IsReplicatorDb ->
+[{before_doc_update, fun couch_replicator_docs:before_doc_update/2},
+ {after_doc_read, fun couch_replicator_docs:after_doc_read/2},
+ sys_db | Options];
+IsUsersDb ->
+[{before_doc_update, fun couch_users_db:before_doc_update/2},
+ {after_doc_read, fun couch_users_db:after_doc_read/2},
+ sys_db | Options];
+true ->
+Options
 end.
 
 path_ends_with(Path, Suffix) when is_binary(Suffix) ->
diff --git a/src/couch_replicator/src/couch_replicator_manager.erl b/src/couch_replicator/src/couch_replicator_manager.erl
deleted file mode 100644
index afccc0b..0000000
--- a/src/couch_replicator/src/couch_replicator_manager.erl
+++ /dev/null
@@ -1,29 +0,0 @@
-% Licensed under the Apache License, Version 2.0 (the "License"); you may not
-% use this file except in compliance with the License. You may obtain a copy of
-% the License at
-%
-%   http://www.apache.org/licenses/LICENSE-2.0
-%
-% Unless required by applicable law or agreed to in writing, software
-% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-% License for the specific language governing permissions and limitations under
-% the License.
-
--module(couch_replicator_manager).
-
-% TODO: This is a temporary proxy module to external calls (outside replicator)
-%  to other replicator modules. This is done to avoid juggling multiple repos
-% during development.
-
-% NV: TODO: These functions were moved to couch_replicator_docs
-% but it is still called from fabric_doc_update. Keep it here for now
-% later, update fabric to call couch_replicator_docs instead
--export([before_doc_update/2, after_doc_read/2]).
-
-
-before_doc_update(Doc, Db) ->
-couch_replicator_docs:before_doc_update(Doc, Db).
-
-after_doc_read(Doc, Db) ->
-couch_replicator_docs:after_doc_read(Doc, Db).
diff --git a/src/couch_replicator/test/couch_replicator_modules_load_tests.erl b/src/couch_replicator/test/couch_replicator_modules_load_tests.erl
deleted file mode 100644
index a552d14..0000000
--- a/src/couch_replicator/test/couch_replicator_modules_load_tests.erl
+++ /dev/null
@@ -1,45 +0,0 @@
-% Licensed under the Apache License, Version 2.0 (the "License"); you may not
-% use this file except in compliance with the License. You may obtain a copy of
-% the License at
-%
-%   http://www.apache.org/licenses/LICENSE-2.0
-%
-% Unless required by applicable law or agreed to in writing, software
-% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-% License for the specific language governing permissions and limitations under
-% the License.
-
--module(couch_replicator_modules_load_tests).
-
--include_lib("couch/include/couch_eunit.hrl").
-
-
-modules_load_test_() ->
-{
-"Verify that all modules loads",
-should_load_modules()
-}.
-
-
-should_load_modules() ->
- 

[Couchdb Wiki] Update of "EntityRelationship" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Couchdb Wiki" for 
change notification.

The "EntityRelationship" page has been changed by JoanTouzet:
https://wiki.apache.org/couchdb/EntityRelationship?action=diff&rev1=23&rev2=25

- <>
+ Moved to http://docs.couchdb.org/en/stable/ddocs/views/nosql.html
  
- = Modeling Entity Relationships in CouchDB =
- <>
- 
- This page is mostly a translation of Google's 
[[http://code.google.com/appengine/articles/modeling.html|Modeling Entity 
Relationships]] article in CouchDB terms. It could use more code examples and 
more examples of actual output. Since this is a wiki, feel free to update this 
document to make things clearer, fix inaccuracies etc. This article is also 
related to 
[[http://wiki.apache.org/couchdb/Transaction_model_use_cases|Transaction model 
use cases]] discussion, as it involves multiple document updates.
- 
- As a quick summary, this document explains how to do things that you would 
normally use SQL JOIN for.
- 
- == Why would I need entity relationships? ==
- Imagine you are building a snazzy new web application that includes an 
address book where users can store their contacts. For each contact the user 
stores, you want to capture the contact's name, birthday (which they mustn't 
forget!), their address, telephone number, and the company they work for. When the 
user wants to add an address, they enter the information into a form and the 
form saves the information in a model that looks something like this:
- 
- {{{#!highlight javascript
- {
-   "_id":"some unique string that is assigned to the contact",
-   "type":"contact",
-   "name":"contact's name",
-   "birth_day":"a date in string form",
-   "address":"the address in string form (like 1600 Ampitheater Pkwy., 
Mountain View, CA)",
-   "phone_number":"phone number in string form",
-   "company_title":"company title",
-   "company_name":"name of the company",
-   "company_description":"some explanation about the company",
-   "company_address":"the company address in string form"
- }
- }}}
- (Note that ''type'' doesn't mean anything to CouchDB, we're just using it 
here for our own convenience. ''_id'' is the only thing CouchDB looks at)
- 
- That's great, your users immediately begin to use their address book and soon 
the datastore starts to fill up. Not long after the deployment of your new 
application you hear from someone that they are not happy that there is only 
one phone number. What if they want to store someone's work telephone number in 
addition to their home number? No problem you think, you can just add a work 
phone number to your structure. You change your data structure to look more 
like this:
- 
- {{{
-   "phone_number":"home phone in string form",
-   "work_phone_number":"work phone in string form",
- }}}
- Update the form with the new field and you are back in business. Soon after 
redeploying your application, you get a number of new complaints. When they see 
the new phone number field, people start asking for even more fields. Some 
people want a fax number field, others want a mobile field. Some people even 
want more than one mobile field (boy modern life sure is hectic)! You could add 
another field for fax, and another for mobile, maybe two. What about if people 
have three mobile phones? What if they have ten? What if someone invents a 
phone for a place you've never thought of? Your model needs to use 
relationships.
- 
- == One to Many ==
- The answer is to allow users to assign as many phone numbers to each of their 
contacts as they like.
- 
- In CouchDB, there are 2 ways to achieve this.
- 
-  1. Use separate documents
-  1. Use an embedded array
- 
- === One to Many: Separate documents ===
- When using separate documents, you could have documents like this for the 
phone numbers:
- 
- {{{#!highlight javascript
- {
-   "_id":"the phone number",
-   "type":"phone",
-   "contact_id":"id of the contact document that has this phone number",
-   "phone_type":"string describing type of phone, like 
home,work,fax,mobile,..."
- }
- }}}
- (Note the use of the ''_id'' field to store the phone number. Phone numbers 
are unique (when prefixed with country and area code) and therefore this makes 
a great ''natural key'')
- 
- The key to making all this work is the contact property. By storing the 
contact id in it, you can refer to the owning contact in a unique way, since 
''_id'' fields are unique in CouchDB databases.
- 
- Creating the relationship between a contact and one of its phone numbers is 
easy to do. Let's say you have a contact named "Scott" who has a home phone and 
a mobile phone. You populate his contact info like this (using Perl and 
Net::CouchDB):
- 
- {{{#!highlight perl
- $db->insert({type => 'contact', _id => 'Scott', name => 'My Friend Scott'});
- $db->insert({type => 'phone', _id => '(650) 555 - 2200', contact_id => 
'Scott', phone_type => 'home'});
- $db->insert({type => 'phone', _id => '(650) 555 - 2201', contact_id => 

[Couchdb Wiki] Update of "EntityRelationship" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "EntityRelationship" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/EntityRelationship?action=diff&rev1=25&rev2=26

Comment:
Moved to http://docs.couchdb.org/en/stable/ddocs/views/nosql.html

- Moved to http://docs.couchdb.org/en/stable/ddocs/views/nosql.html
  


[Couchdb Wiki] Update of "FUQ" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "FUQ" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/FUQ?action=diff&rev1=12&rev2=13

Comment:
Migrated fully to https://docs.couchdb.org/en/latest

- <>
  
- = Frequently Unasked Questions =
- On IRC and the Mailing List, these are the Questions People should have asked 
to help them stay Relaxed.
- 
- <>
- 
- == Documents ==
- 
-  1. What is the benefit of using the _bulk_docs API instead of PUTting single 
documents to CouchDB?
-   . Aside from the HTTP overhead and roundtrip you are saving, the main 
advantage is that CouchDB can handle the B tree updates more efficiently, 
decreasing rewriting of intermediary and parent nodes, both improving speed and 
saving disk space.
- 
-  1. Why can't I use MVCC in CouchDB as a revision control system for my docs?
-   . The revisions CouchDB stores for each document are removed when the 
database is compacted. The database may be compacted at any time by a DB admin 
to save hard drive space. If you were using those revisions for document 
versioning, you'd lose them all upon compaction. In addition, your disk usage 
would grow with every document iteration and (if you prevented database 
compaction) you'd have no way to recover the used disk space.
- 
-  1. Does compaction remove deleted documents’ contents?
-   . We keep the latest revision of every document ever seen, even if that 
revision has '"_deleted":true' in it. This is so that replication can ensure 
eventual consistency between replicas. Not only will all replicas agree on 
which documents are present and which are not, but also the contents of both.
- 
-   . Deleted documents specifically allow for a body to be set in the deleted 
revision. The intention for this is to have a "who deleted this" type of meta 
data for the doc. Some client libraries delete docs by grabbing the current 
object blob, adding a '"_deleted":true' member, and then sending it back which 
inadvertently (in most cases) keeps the last doc body around after compaction.
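The `_bulk_docs` advantage described in the first question above comes down to sending one JSON body with many documents instead of one PUT per document. A minimal sketch of that request body, assuming the public `POST /{db}/_bulk_docs` endpoint and its `docs` array (doc contents here are made-up examples):

```python
import json

# Sketch of a _bulk_docs request body: many documents in one round trip,
# instead of one HTTP PUT per document. The doc contents are invented
# examples; only the {"docs": [...]} envelope is the CouchDB API shape.
docs = [
    {"_id": "contact:scott", "type": "contact", "name": "Scott"},
    {"_id": "phone:650-555-2200", "type": "phone",
     "contact_id": "contact:scott"},
]
payload = json.dumps({"docs": docs})

# The request itself (not executed here) would be roughly:
#   POST /addressbook/_bulk_docs
#   Content-Type: application/json
#   <payload>
print(payload)
```

Besides saving round trips, CouchDB can batch the resulting B-tree updates, which is where the speed and disk-space benefit mentioned above comes from.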
- 
- == Views ==
-  1. In a view, why should I not {{{emit(key,doc)}}} ?
- 
-   . The key point here is that by emitting {{{,doc}}} you are duplicating the 
document which is already present in the database (a .couch file), and 
including it in the results of the view (a .view file, with similar structure). 
This is the same as having a SQL Index that includes the original table, 
instead of using a foreign key.
- 
-   The same effect can be achieved by using {{{emit(key,null)}}} and 
?include_docs=true with the view request. This approach has the benefit of not 
duplicating the document data in the view index, which reduces the disk space 
consumed by the view. On the other hand, the file access pattern is slightly 
more expensive for CouchDB. It is usually a premature optimization to include 
the document in the view. As always, if you think you may need to emit the 
document it's always best to test.
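The trade-off above can be modeled outside CouchDB. Map functions are normally written in JavaScript; this Python simulation is only illustrative (the tiny `db`, `view`, and the sample docs are all invented), but it shows why `emit(key, doc)` duplicates documents into the index while `emit(key, null)` plus `?include_docs=true` joins them at read time:

```python
# Simulating the two view styles discussed above. The in-memory "db",
# the view() helper, and the sample documents are invented for
# illustration; real CouchDB map functions are JavaScript.
db = {
    "a": {"_id": "a", "type": "post", "title": "Relax"},
    "b": {"_id": "b", "type": "post", "title": "Views"},
}

def view(map_fn, include_docs=False):
    rows = []
    for doc in db.values():
        for key, value in map_fn(doc):
            row = {"id": doc["_id"], "key": key, "value": value}
            if include_docs:
                # Joined from the database at query time, not stored
                # in the view index.
                row["doc"] = db[row["id"]]
            rows.append(row)
    return sorted(rows, key=lambda r: r["key"])

# emit(key, doc): every document is copied into the view index.
fat = view(lambda doc: [(doc["title"], doc)])
# emit(key, null) + ?include_docs=true: the index stores only keys.
lean = view(lambda doc: [(doc["title"], None)], include_docs=True)
print(lean[0])
```

The rows come back identical in content either way; the difference is whether the document bodies live twice on disk (`fat`) or once (`lean`), at the cost of an extra lookup per row.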
- 
- == Tools ==
-  1. I decided to roll my own !CouchApp tool or CouchDB client in 
. How cool is that?
- 
-   . Pretty cool! In fact it's a great way to get familiar with the API. 
However - wrappers around the HTTP API are not necessarily of great use as 
CouchDB already makes this very easy. Mapping CouchDB semantics onto your 
language's native data structures is much more useful to people. Many languages 
are already covered and we'd really like to see your ideas and enhancements 
incorporated into the existing tools if possible, and helping to keep them up 
to date. Ask on the mailing list about contributing!
- 
- == Log Files ==
-  1. Those Erlang messages in the log are pretty confusing. What gives?
-   . While the Erlang messages in the log can be confusing to someone 
unfamiliar with Erlang, with practice they become very helpful. The CouchDB 
developers do try to catch and log messages that might be useful to a system 
administrator in a friendly format, but occasionally a bug or otherwise 
unexpected behavior manifests itself in more verbose dumps of Erlang server 
state. These messages can be very useful to CouchDB developers. If you find 
many confusing messages in your log, feel free to inquire about them. If they 
are expected, devs can work to ensure that the message is more cleanly 
formatted. Otherwise, the messages may indicate a bug in the code.
-   In many cases, this is enough to identify the problem. For example, OS 
errors are reported as tagged tuples such as {error,enospc} or 
{error,enoacces}, which respectively mean "You ran out of disk space" and 
"CouchDB doesn't have permission to access that resource". Most of these errors 
are derived from C used to build the Erlang VM and are documented in 
{{{errno.h}}} and related header files. 
[[http://www.ibm.com/developerworks/aix/library/au-errnovariable/|IBM]] 
provides a good introduction to these, and the relevant 
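The errno.h constants referenced above can also be inspected directly from Python's standard library, which is a quick way to decode such a tuple without consulting the header (note the header spells the permission error `EACCES`):

```python
import errno
import os

# The tagged tuples mentioned above carry POSIX errno names from errno.h.
# Python's stdlib exposes the same table, handy for decoding them.
for name in ("ENOSPC", "EACCES"):
    code = getattr(errno, name)
    print(name, code, os.strerror(code))
```

On a typical Linux system this prints the familiar "No space left on device" and "Permission denied" strings; the exact wording is platform-dependent.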

[couchdb] branch feature/database-partition-limits updated (197d5f2 -> cc00b4f)

2018-12-20 Thread davisp
This is an automated email from the ASF dual-hosted git repository.

davisp pushed a change to branch feature/database-partition-limits
in repository https://gitbox.apache.org/repos/asf/couchdb.git.


 discard 197d5f2  Enforce partition size limits
 discard ec14a51  Use an accumulator when merging revision trees
 discard 005b442  Add Elixir tests for database partitions
 discard 718c872  Support partitioned queries in Mango
 discard 004ce09  Use index names when testing index selection
 discard 329f4e3  Optimize offset/limit for partition queries
 discard c5319c4  Optimize all_docs queries in a single partition
 discard 71efe57  Implement partitioned views
 discard d3f508e  Implement `couch_db:get_partition_info/2`
 discard a32d0d6  Implement partitioned dbs
 discard ab806a7  Implement configurable hash functions
 discard 1da3631  Validate design document options more strictly
 discard e943198  Pass the DB record to index validation functions
 discard 92b58ba  Implement `fabric_util:open_cluster_db`
 discard 60d9ee4  Improve `couch_db:clustered_db` flexibility
 discard 3ad082e  Add PSE API to store opaque properties
 add f4195a0  Migrate cluster with(out) quorum js tests as elixir tests (#1812)
 add f60f7a1  Suppress credo TODO suggests (#1822)
 add 88dd125  Move fabric streams to a fabric_streams module
 add 632f303  Clean rexi stream workers when coordinator process is killed
 add 7f9d910  Add PSE API to store opaque properties
 add 90b5eee  Improve `couch_db:clustered_db` flexibility
 add 9649dba  Implement `fabric_util:open_cluster_db`
 add 0d7e38f  Pass the DB record to index validation functions
 add 381bb0a  Validate design document options more strictly
 add 1f569b9  Implement configurable hash functions
 add 50de080  Implement partitioned dbs
 add 2db6577  Implement `couch_db:get_partition_info/2`
 add 16e53ee  Implement partitioned views
 add 2463ee3  Optimize all_docs queries in a single partition
 add 021aa7b  Optimize offset/limit for partition queries
 add d609bec  Use index names when testing index selection
 add 6312363  Support partitioned queries in Mango
 add 60bbe3f  Add Elixir tests for database partitions
 new d0a4ac4  Use an accumulator when merging revision trees
 new cc00b4f  Enforce partition size limits

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (197d5f2)
            \
             N -- N -- N   refs/heads/feature/database-partition-limits (cc00b4f)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 Makefile   |  14 +-
 Makefile.win   |  14 +-
 .../src/couch_replicator_fabric.erl|   4 +-
 src/fabric/src/fabric_streams.erl  | 250 +
 src/fabric/src/fabric_util.erl |  87 ---
 src/fabric/src/fabric_view_all_docs.erl|   4 +-
 src/fabric/src/fabric_view_changes.erl |   4 +-
 src/fabric/src/fabric_view_map.erl |   4 +-
 src/fabric/src/fabric_view_reduce.erl  |   4 +-
 test/elixir/.credo.exs |   7 +-
 test/elixir/test/cluster_with_quorum_test.exs  | 179 +++
 test/elixir/test/cluster_without_quorum_test.exs   | 178 +++
 12 files changed, 648 insertions(+), 101 deletions(-)
 create mode 100644 src/fabric/src/fabric_streams.erl
 create mode 100644 test/elixir/test/cluster_with_quorum_test.exs
 create mode 100644 test/elixir/test/cluster_without_quorum_test.exs



[couchdb] 02/02: Enforce partition size limits

2018-12-20 Thread davisp
This is an automated email from the ASF dual-hosted git repository.

davisp pushed a commit to branch feature/database-partition-limits
in repository https://gitbox.apache.org/repos/asf/couchdb.git

commit cc00b4ff3b7e98ac5cd3ef5e81232384c9b9936d
Author: Paul J. Davis 
AuthorDate: Fri Dec 14 11:06:03 2018 -0600

Enforce partition size limits

This limit helps prevent users from inadvertently misusing partitions by
refusing to add documents when the size of a partition exceeds 10GiB.

Co-authored-by: Robert Newson 
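The enforcement rule this commit describes (reject document updates once a partition exceeds 10 GiB) can be sketched in miniature. This is only an illustrative model, not the Erlang implementation: the function names, the in-memory size map, and the byte counts are invented, and only the docid-prefix partitioning and the 10 GiB default mirror the commit.

```python
DEFAULT_MAX_PARTITION_SIZE = 10 * 1024**3  # 10 GiB, matching default.ini

def partition_of(doc_id):
    # In partitioned databases the partition is the docid prefix
    # before the ":" separator.
    return doc_id.split(":", 1)[0]

def check_update(doc_id, partition_sizes, added_bytes,
                 max_size=DEFAULT_MAX_PARTITION_SIZE):
    """Refuse updates that would grow an already-full partition.

    A sketch of the rule described above: once a partition exceeds the
    limit, growth is rejected with a partition_overflow-style error.
    """
    part = partition_of(doc_id)
    full = partition_sizes.get(part, 0) >= max_size
    if full and added_bytes > 0:
        raise ValueError(
            "partition_overflow: '%s' exceeds partition limit" % doc_id)
    partition_sizes[part] = partition_sizes.get(part, 0) + added_bytes

# Hypothetical partitions: "hot" is already at the limit, "cold" is empty.
sizes = {"hot": DEFAULT_MAX_PARTITION_SIZE, "cold": 0}
check_update("cold:doc1", sizes, 512)       # accepted
try:
    check_update("hot:doc1", sizes, 512)    # rejected: partition is full
    overflowed = False
except ValueError:
    overflowed = True
```

In the real server the rejected update surfaces as the HTTP 403 `partition_overflow` error added to chttpd in this same commit.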
---
 rel/overlay/etc/default.ini|  5 +++
 src/chttpd/src/chttpd.erl  |  3 ++
 src/couch/src/couch_db_updater.erl | 81 --
 3 files changed, 85 insertions(+), 4 deletions(-)

diff --git a/rel/overlay/etc/default.ini b/rel/overlay/etc/default.ini
index a77add4..ae9d313 100644
--- a/rel/overlay/etc/default.ini
+++ b/rel/overlay/etc/default.ini
@@ -64,6 +64,11 @@ default_engine = couch
 ; move deleted databases/shards there instead. You can then manually delete
 ; these files later, as desired.
 ;enable_database_recovery = false
+;
+; Set the maximum size allowed for a partition. This helps users avoid
+; inadvertently abusing partitions resulting in hot shards. The default
+; is 10GiB. A value of 0 or less will disable partition size checks.
+;max_partition_size = 10737418240
 
 [couchdb_engines]
 ; The keys in this section are the filename extension that
diff --git a/src/chttpd/src/chttpd.erl b/src/chttpd/src/chttpd.erl
index 2f241cd..6558b1e 100644
--- a/src/chttpd/src/chttpd.erl
+++ b/src/chttpd/src/chttpd.erl
@@ -873,6 +873,9 @@ error_info(conflict) ->
 {409, <<"conflict">>, <<"Document update conflict.">>};
 error_info({conflict, _}) ->
 {409, <<"conflict">>, <<"Document update conflict.">>};
+error_info({partition_overflow, DocId}) ->
+Descr = <<"'", DocId/binary, "' exceeds partition limit">>,
+{403, <<"partition_overflow">>, Descr};
 error_info({{not_found, missing}, {_, _}}) ->
 {409, <<"not_found">>, <<"missing_rev">>};
 error_info({forbidden, Error, Msg}) ->
diff --git a/src/couch/src/couch_db_updater.erl b/src/couch/src/couch_db_updater.erl
index 95508e2..00fee90 100644
--- a/src/couch/src/couch_db_updater.erl
+++ b/src/couch/src/couch_db_updater.erl
@@ -21,6 +21,7 @@
 -include("couch_db_int.hrl").
 
 -define(IDLE_LIMIT_DEFAULT, 61000).
+-define(DEFAULT_MAX_PARTITION_SIZE, 16#280000000). % 10 GiB
 
 
 -record(merge_acc, {
@@ -28,7 +29,8 @@
 merge_conflicts,
 add_infos = [],
 rem_seqs = [],
-cur_seq
+cur_seq,
+full_partitions = []
 }).
 
 
@@ -466,13 +468,22 @@ merge_rev_trees([], [], Acc) ->
 merge_rev_trees([NewDocs | RestDocsList], [OldDocInfo | RestOldInfo], Acc) ->
 #merge_acc{
 revs_limit = Limit,
-merge_conflicts = MergeConflicts
+merge_conflicts = MergeConflicts,
+full_partitions = FullPartitions
 } = Acc,
 
 % Track doc ids so we can debug large revision trees
 erlang:put(last_id_merged, OldDocInfo#full_doc_info.id),
 NewDocInfo0 = lists:foldl(fun({Client, NewDoc}, OldInfoAcc) ->
-merge_rev_tree(OldInfoAcc, NewDoc, Client, MergeConflicts)
+NewInfo = merge_rev_tree(OldInfoAcc, NewDoc, Client, MergeConflicts),
+case is_overflowed(NewInfo, OldInfoAcc, FullPartitions) of
+true when not MergeConflicts ->
+DocId = NewInfo#doc.id,
+send_result(Client, NewDoc, {partition_overflow, DocId}),
+OldInfoAcc;
+false ->
+NewInfo
+end
 end, OldDocInfo, NewDocs),
 NewDocInfo1 = maybe_stem_full_doc_info(NewDocInfo0, Limit),
 % When MergeConflicts is false, we updated #full_doc_info.deleted on every
@@ -595,6 +606,16 @@ merge_rev_tree(OldInfo, NewDoc, _Client, true) ->
 {NewTree, _} = couch_key_tree:merge(OldTree, NewTree0),
 OldInfo#full_doc_info{rev_tree = NewTree}.
 
+is_overflowed(_New, _Old, []) ->
+false;
+is_overflowed(Old, Old, _FullPartitions) ->
+false;
+is_overflowed(New, Old, FullPartitions) ->
+Partition = couch_partition:from_docid(New#full_doc_info.id),
+NewSize = estimate_size(New),
+OldSize = estimate_size(Old),
+lists:member(Partition, FullPartitions) andalso NewSize > OldSize.
+
 maybe_stem_full_doc_info(#full_doc_info{rev_tree = Tree} = Info, Limit) ->
 case config:get_boolean("couchdb", "stem_interactive_updates", true) of
 true ->
@@ -617,13 +638,31 @@ update_docs_int(Db, DocsList, LocalDocs, MergeConflicts, FullCommit) ->
 (Id, not_found) ->
 #full_doc_info{id=Id}
 end, Ids, OldDocLookups),
+
+%% Get the list of full partitions
+FullPartitions = case couch_db:is_partitioned(Db) of
+true ->
+case max_partition_size() of
+N when N =< 0 ->
+[];
+Max ->
+Partitions = lists:usort(lists:map(fun(Id) ->
+

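The (truncated) diff above adds a partition-size guard to the document update path. A rough Python sketch of the `is_overflowed` check — helper names are illustrative, the real implementation is the Erlang code in `couch_db_updater.erl`:

```python
# Hedged sketch of the partition overflow check, not CouchDB's actual API.
# A partitioned doc id has the form "partition:docid"; a write is rejected
# only when its partition is already full AND the update would grow it.

def partition_from_docid(doc_id):
    # Mirrors couch_partition:from_docid/1 (illustrative only).
    return doc_id.split(":", 1)[0]

def is_overflowed(new_size, old_size, doc_id, full_partitions):
    if not full_partitions:
        return False  # size checks disabled, or no partition is full
    if new_size == old_size:
        return False  # unchanged doc info is never rejected
    partition = partition_from_docid(doc_id)
    return partition in full_partitions and new_size > old_size
```

Note that deletes and shrinking updates still go through (`new_size > old_size` fails), so a full partition can always be drained back under its limit.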
[couchdb] 01/02: Use an accumulator when merging revision trees

2018-12-20 Thread davisp
This is an automated email from the ASF dual-hosted git repository.

davisp pushed a commit to branch feature/database-partition-limits
in repository https://gitbox.apache.org/repos/asf/couchdb.git

commit d0a4ac42d450055222ea4e475848f7c35c17ad88
Author: Paul J. Davis 
AuthorDate: Fri Dec 14 10:31:02 2018 -0600

Use an accumulator when merging revision trees

This cleans up the `couch_db_updater:merge_rev_trees/7` to instead use
an accumulator argument.
---
 src/couch/src/couch_db_updater.erl | 57 --
 1 file changed, 43 insertions(+), 14 deletions(-)

diff --git a/src/couch/src/couch_db_updater.erl b/src/couch/src/couch_db_updater.erl
index c0974aa..95508e2 100644
--- a/src/couch/src/couch_db_updater.erl
+++ b/src/couch/src/couch_db_updater.erl
@@ -23,6 +23,15 @@
 -define(IDLE_LIMIT_DEFAULT, 61000).
 
 
+-record(merge_acc, {
+revs_limit,
+merge_conflicts,
+add_infos = [],
+rem_seqs = [],
+cur_seq
+}).
+
+
 init({Engine, DbName, FilePath, Options0}) ->
 erlang:put(io_priority, {db_update, DbName}),
 update_idle_limit_from_config(),
@@ -450,11 +459,18 @@ doc_tag(#doc{meta=Meta}) ->
 Else -> throw({invalid_doc_tag, Else})
 end.
 
-merge_rev_trees(_Limit, _Merge, [], [], AccNewInfos, AccRemoveSeqs, AccSeq) ->
-{ok, lists:reverse(AccNewInfos), AccRemoveSeqs, AccSeq};
-merge_rev_trees(Limit, MergeConflicts, [NewDocs|RestDocsList],
-[OldDocInfo|RestOldInfo], AccNewInfos, AccRemoveSeqs, AccSeq) ->
-erlang:put(last_id_merged, OldDocInfo#full_doc_info.id), % for debugging
+merge_rev_trees([], [], Acc) ->
+{ok, Acc#merge_acc{
+add_infos = lists:reverse(Acc#merge_acc.add_infos)
+}};
+merge_rev_trees([NewDocs | RestDocsList], [OldDocInfo | RestOldInfo], Acc) ->
+#merge_acc{
+revs_limit = Limit,
+merge_conflicts = MergeConflicts
+} = Acc,
+
+% Track doc ids so we can debug large revision trees
+erlang:put(last_id_merged, OldDocInfo#full_doc_info.id),
 NewDocInfo0 = lists:foldl(fun({Client, NewDoc}, OldInfoAcc) ->
 merge_rev_tree(OldInfoAcc, NewDoc, Client, MergeConflicts)
 end, OldDocInfo, NewDocs),
@@ -475,22 +491,25 @@ merge_rev_trees(Limit, MergeConflicts, [NewDocs|RestDocsList],
 end,
 if NewDocInfo2 == OldDocInfo ->
 % nothing changed
-merge_rev_trees(Limit, MergeConflicts, RestDocsList, RestOldInfo,
-AccNewInfos, AccRemoveSeqs, AccSeq);
+merge_rev_trees(RestDocsList, RestOldInfo, Acc);
 true ->
 % We have updated the document, give it a new update_seq. Its
 % important to note that the update_seq on OldDocInfo should
 % be identical to the value on NewDocInfo1.
 OldSeq = OldDocInfo#full_doc_info.update_seq,
 NewDocInfo3 = NewDocInfo2#full_doc_info{
-update_seq = AccSeq + 1
+update_seq = Acc#merge_acc.cur_seq + 1
 },
 RemoveSeqs = case OldSeq of
-0 -> AccRemoveSeqs;
-_ -> [OldSeq | AccRemoveSeqs]
+0 -> Acc#merge_acc.rem_seqs;
+_ -> [OldSeq | Acc#merge_acc.rem_seqs]
 end,
-merge_rev_trees(Limit, MergeConflicts, RestDocsList, RestOldInfo,
-[NewDocInfo3|AccNewInfos], RemoveSeqs, AccSeq+1)
+NewAcc = Acc#merge_acc{
+add_infos = [NewDocInfo3 | Acc#merge_acc.add_infos],
+rem_seqs = RemoveSeqs,
+cur_seq = Acc#merge_acc.cur_seq + 1
+},
+merge_rev_trees(RestDocsList, RestOldInfo, NewAcc)
 end.
 
 merge_rev_tree(OldInfo, NewDoc, Client, false)
@@ -599,8 +618,18 @@ update_docs_int(Db, DocsList, LocalDocs, MergeConflicts, FullCommit) ->
 #full_doc_info{id=Id}
 end, Ids, OldDocLookups),
 % Merge the new docs into the revision trees.
-{ok, NewFullDocInfos, RemSeqs, _} = merge_rev_trees(RevsLimit,
-MergeConflicts, DocsList, OldDocInfos, [], [], UpdateSeq),
+AccIn = #merge_acc{
+revs_limit = RevsLimit,
+merge_conflicts = MergeConflicts,
+add_infos = [],
+rem_seqs = [],
+cur_seq = UpdateSeq
+},
+{ok, AccOut} = merge_rev_trees(DocsList, OldDocInfos, AccIn),
+#merge_acc{
+add_infos = NewFullDocInfos,
+rem_seqs = RemSeqs
+} = AccOut,
 
 % Write out the document summaries (the bodies are stored in the nodes of
 % the trees, the attachments are already written to disk)



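The refactor above replaces seven positional arguments with a single `#merge_acc{}` record threaded through the recursion. The same pattern sketched in Python — field names mirror the Erlang record, but the merge body is stubbed out, so this is illustrative rather than CouchDB's actual logic:

```python
from dataclasses import dataclass, field

@dataclass
class MergeAcc:
    # Mirrors #merge_acc{}: configuration travels with the running totals,
    # so adding a field later does not change the recursion signature.
    revs_limit: int
    merge_conflicts: bool
    add_infos: list = field(default_factory=list)
    rem_seqs: list = field(default_factory=list)
    cur_seq: int = 0

def merge_rev_trees(docs_list, old_infos, acc):
    # Base case: both lists exhausted, return the finished accumulator.
    if not docs_list:
        return acc
    doc, old_info = docs_list[0], old_infos[0]
    # Stub "merge": every doc counts as updated and gets the next seq.
    acc.cur_seq += 1
    acc.add_infos.append({"id": old_info["id"], "update_seq": acc.cur_seq})
    if old_info["update_seq"] != 0:
        # An existing seq is superseded and must be removed from the index.
        acc.rem_seqs.append(old_info["update_seq"])
    return merge_rev_trees(docs_list[1:], old_infos[1:], acc)

acc = merge_rev_trees(
    ["doc_a", "doc_b"],
    [{"id": "a", "update_seq": 0}, {"id": "b", "update_seq": 7}],
    MergeAcc(revs_limit=1000, merge_conflicts=False, cur_seq=10),
)
print(acc.cur_seq, acc.rem_seqs)  # 12 [7]
```

Keeping state in one record is what lets the follow-up commit add `full_partitions` without touching every call site of `merge_rev_trees`.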
[couchdb] branch feature/database-partitions updated (005b442 -> 60bbe3f)

2018-12-20 Thread davisp
This is an automated email from the ASF dual-hosted git repository.

davisp pushed a change to branch feature/database-partitions
in repository https://gitbox.apache.org/repos/asf/couchdb.git.


omit 005b442  Add Elixir tests for database partitions
omit 718c872  Support partitioned queries in Mango
omit 004ce09  Use index names when testing index selection
omit 329f4e3  Optimize offset/limit for partition queries
omit c5319c4  Optimize all_docs queries in a single partition
omit 71efe57  Implement partitioned views
omit d3f508e  Implement `couch_db:get_partition_info/2`
omit a32d0d6  Implement partitioned dbs
omit ab806a7  Implement configurable hash functions
omit 1da3631  Validate design document options more strictly
omit e943198  Pass the DB record to index validation functions
omit 92b58ba  Implement `fabric_util:open_cluster_db`
omit 60d9ee4  Improve `couch_db:clustered_db` flexibility
omit 3ad082e  Add PSE API to store opaque properties
 add f4195a0  Migrate cluster with(out) quorum js tests as elixir tests (#1812)
 add f60f7a1  Suppress credo TODO suggests (#1822)
 add 88dd125  Move fabric streams to a fabric_streams module
 add 632f303  Clean rexi stream workers when coordinator process is killed
 add 7f9d910  Add PSE API to store opaque properties
 add 90b5eee  Improve `couch_db:clustered_db` flexibility
 add 9649dba  Implement `fabric_util:open_cluster_db`
 add 0d7e38f  Pass the DB record to index validation functions
 add 381bb0a  Validate design document options more strictly
 add 1f569b9  Implement configurable hash functions
 add 50de080  Implement partitioned dbs
 add 2db6577  Implement `couch_db:get_partition_info/2`
 add 16e53ee  Implement partitioned views
 add 2463ee3  Optimize all_docs queries in a single partition
 add 021aa7b  Optimize offset/limit for partition queries
 add d609bec  Use index names when testing index selection
 add 6312363  Support partitioned queries in Mango
 add 60bbe3f  Add Elixir tests for database partitions

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (005b442)
\
 N -- N -- N   refs/heads/feature/database-partitions (60bbe3f)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 Makefile   |  14 +-
 Makefile.win   |  14 +-
 .../src/couch_replicator_fabric.erl|   4 +-
 src/fabric/src/fabric_streams.erl  | 250 +
 src/fabric/src/fabric_util.erl |  87 ---
 src/fabric/src/fabric_view_all_docs.erl|   4 +-
 src/fabric/src/fabric_view_changes.erl |   4 +-
 src/fabric/src/fabric_view_map.erl |   4 +-
 src/fabric/src/fabric_view_reduce.erl  |   4 +-
 test/elixir/.credo.exs |   7 +-
 test/elixir/test/cluster_with_quorum_test.exs  | 179 +++
 test/elixir/test/cluster_without_quorum_test.exs   | 178 +++
 12 files changed, 648 insertions(+), 101 deletions(-)
 create mode 100644 src/fabric/src/fabric_streams.erl
 create mode 100644 test/elixir/test/cluster_with_quorum_test.exs
 create mode 100644 test/elixir/test/cluster_without_quorum_test.exs



[couchdb] branch master updated (f60f7a1 -> 632f303)

2018-12-20 Thread vatamane
This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/couchdb.git.


from f60f7a1  Suppress credo TODO suggests (#1822)
 new 88dd125  Move fabric streams to a fabric_streams module
 new 632f303  Clean rexi stream workers when coordinator process is killed

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../src/couch_replicator_fabric.erl|   4 +-
 src/fabric/src/fabric_streams.erl  | 251 +
 src/fabric/src/fabric_util.erl |  88 
 src/fabric/src/fabric_view_all_docs.erl|   4 +-
 src/fabric/src/fabric_view_changes.erl |   4 +-
 src/fabric/src/fabric_view_map.erl |   4 +-
 src/fabric/src/fabric_view_reduce.erl  |   4 +-
 7 files changed, 261 insertions(+), 98 deletions(-)
 create mode 100644 src/fabric/src/fabric_streams.erl



[couchdb] 02/02: Clean rexi stream workers when coordinator process is killed

2018-12-20 Thread vatamane
This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/couchdb.git

commit 632f303a47bd89a97c831fd0532cb7541b80355d
Author: Nick Vatamaniuc 
AuthorDate: Thu Dec 20 12:19:01 2018 -0500

Clean rexi stream workers when coordinator process is killed

Sometimes fabric coordinators end up getting brutally terminated [1], and in that
case they might never process their `after` clause where their remote rexi
workers are killed. Those workers are left lingering around keeping databases
active for up to 5 minutes at a time.

To prevent that from happening, let coordinators which use streams spawn an
auxiliary cleaner process. This process will monitor the main coordinator and
if it dies will ensure remote workers are killed, freeing resources
if it dies will ensure remote workers are killed, freeing resources
immediately. In order not to send 2x the number of kill messages during the
normal exit, fabric_util:cleanup() will stop the auxiliary process before
continuing.

[1] One instance is when the ddoc cache is refreshed:
 
https://github.com/apache/couchdb/blob/master/src/ddoc_cache/src/ddoc_cache_entry.erl#L236
---
 src/fabric/src/fabric_streams.erl | 132 ++
 1 file changed, 132 insertions(+)

diff --git a/src/fabric/src/fabric_streams.erl b/src/fabric/src/fabric_streams.erl
index 32217c3..ae0c2be 100644
--- a/src/fabric/src/fabric_streams.erl
+++ b/src/fabric/src/fabric_streams.erl
@@ -22,6 +22,9 @@
 -include_lib("mem3/include/mem3.hrl").
 
 
+-define(WORKER_CLEANER, fabric_worker_cleaner).
+
+
 start(Workers, Keypos) ->
 start(Workers, Keypos, undefined, undefined).
 
@@ -32,6 +35,7 @@ start(Workers0, Keypos, StartFun, Replacements) ->
 start_fun = StartFun,
 replacements = Replacements
 },
+spawn_worker_cleaner(self(), Workers0),
 Timeout = fabric_util:request_timeout(),
 case rexi_utils:recv(Workers0, Keypos, Fun, Acc, Timeout, infinity) of
 {ok, #stream_acc{workers=Workers}} ->
@@ -47,6 +51,16 @@ start(Workers0, Keypos, StartFun, Replacements) ->
 
 
 cleanup(Workers) ->
+% Stop the auxiliary cleaner process as we got to the point where cleanup
+% happens in the regular fashion so we don't want to send 2x the number of
+% kill messages
+case get(?WORKER_CLEANER) of
+CleanerPid when is_pid(CleanerPid) ->
+erase(?WORKER_CLEANER),
+exit(CleanerPid, kill);
+_ ->
+ok
+end,
 fabric_util:cleanup(Workers).
 
 
@@ -72,6 +86,7 @@ handle_stream_start({rexi_EXIT, Reason}, Worker, St) ->
 {value, {_Range, WorkerReplacements}, NewReplacements} ->
 FinalWorkers = lists:foldl(fun(Repl, NewWorkers) ->
 NewWorker = (St#stream_acc.start_fun)(Repl),
+add_worker_to_cleaner(self(), NewWorker),
 fabric_dict:store(NewWorker, waiting, NewWorkers)
 end, Workers, WorkerReplacements),
 % Assert that our replaced worker provides us
@@ -117,3 +132,120 @@ handle_stream_start({ok, ddoc_updated}, _, St) ->
 
 handle_stream_start(Else, _, _) ->
 exit({invalid_stream_start, Else}).
+
+
+% Spawn an auxiliary rexi worker cleaner. This will be used in cases
+% when the coordinator (request) process is forcibly killed and doesn't
+% get a chance to process its `after` fabric:clean/1 clause.
+spawn_worker_cleaner(Coordinator, Workers) ->
+case get(?WORKER_CLEANER) of
+undefined ->
+Pid = spawn(fun() ->
+erlang:monitor(process, Coordinator),
+cleaner_loop(Coordinator, Workers)
+end),
+put(?WORKER_CLEANER, Pid),
+Pid;
+ ExistingCleaner ->
+ExistingCleaner
+   end.
+
+
+cleaner_loop(Pid, Workers) ->
+receive
+{add_worker, Pid, Worker} ->
+cleaner_loop(Pid, [Worker | Workers]);
+{'DOWN', _, _, Pid, _} ->
+fabric_util:cleanup(Workers)
+end.
+
+
+add_worker_to_cleaner(CoordinatorPid, Worker) ->
+case get(?WORKER_CLEANER) of
+CleanerPid when is_pid(CleanerPid) ->
+CleanerPid ! {add_worker, CoordinatorPid, Worker};
+_ ->
+ok
+end.
+
+
+
+-ifdef(TEST).
+
+-include_lib("eunit/include/eunit.hrl").
+
+worker_cleaner_test_() ->
+{
+"Fabric spawn_worker_cleaner test", {
+setup, fun setup/0, fun teardown/1,
+fun(_) -> [
+should_clean_workers(),
+does_not_fire_if_cleanup_called(),
+should_clean_additional_worker_too()
+] end
+}
+}.
+
+
+should_clean_workers() ->
+?_test(begin
+meck:reset(rexi),
+erase(?WORKER_CLEANER),
+Workers = [
+#shard{node = 'n1', ref = make_ref()},
+#shard{node = 'n2', ref = make_ref()}
+],
+   

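The cleaner described above boils down to: watch the coordinator, and if it dies, run the cleanup it never got to. A minimal Python analogue using a monitor thread — purely illustrative, since the real code uses Erlang processes, `erlang:monitor/2`, and `fabric_util:cleanup/1`:

```python
import threading

# Hedged sketch: a helper thread stands in for the auxiliary cleaner
# process, an Event stands in for the coordinator's 'DOWN' signal, and
# appending to `cleaned` stands in for killing remote rexi workers.
def spawn_worker_cleaner(coordinator_down, workers, cleaned):
    def cleaner():
        coordinator_down.wait()   # analogous to monitoring the coordinator
        for w in workers:
            cleaned.append(w)     # stand-in for fabric_util:cleanup/1
    t = threading.Thread(target=cleaner, daemon=True)
    t.start()
    return t

coordinator_down = threading.Event()
cleaned = []
t = spawn_worker_cleaner(coordinator_down, ["w1", "w2"], cleaned)
coordinator_down.set()            # simulate the coordinator being killed
t.join(timeout=1)
print(cleaned)                    # ['w1', 'w2']
```

The real cleaner also accepts `{add_worker, ...}` messages so replacement workers spawned mid-stream are cleaned up too, and `cleanup/1` kills the cleaner first so workers aren't sent kill messages twice on a normal exit.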
[couchdb] 01/02: Move fabric streams to a fabric_streams module

2018-12-20 Thread vatamane
This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/couchdb.git

commit 88dd1255595b513ff778e5efa4b2399aa3ccb570
Author: Nick Vatamaniuc 
AuthorDate: Thu Dec 20 12:11:10 2018 -0500

Move fabric streams to a fabric_streams module

Streams functionality is fairly isolated from the rest of the utils module so
move it to its own. This is mostly in preparation to add a streams workers
cleaner process.
---
 .../src/couch_replicator_fabric.erl|   4 +-
 src/fabric/src/fabric_streams.erl  | 119 +
 src/fabric/src/fabric_util.erl |  88 ---
 src/fabric/src/fabric_view_all_docs.erl|   4 +-
 src/fabric/src/fabric_view_changes.erl |   4 +-
 src/fabric/src/fabric_view_map.erl |   4 +-
 src/fabric/src/fabric_view_reduce.erl  |   4 +-
 7 files changed, 129 insertions(+), 98 deletions(-)

diff --git a/src/couch_replicator/src/couch_replicator_fabric.erl b/src/couch_replicator/src/couch_replicator_fabric.erl
index 6998b28..1650105 100644
--- a/src/couch_replicator/src/couch_replicator_fabric.erl
+++ b/src/couch_replicator/src/couch_replicator_fabric.erl
@@ -27,12 +27,12 @@ docs(DbName, Options, QueryArgs, Callback, Acc) ->
Shards, couch_replicator_fabric_rpc, docs, [Options, QueryArgs]),
 RexiMon = fabric_util:create_monitors(Workers0),
 try
-case fabric_util:stream_start(Workers0, #shard.ref) of
+case fabric_streams:start(Workers0, #shard.ref) of
 {ok, Workers} ->
 try
 docs_int(DbName, Workers, QueryArgs, Callback, Acc)
 after
-fabric_util:cleanup(Workers)
+fabric_streams:cleanup(Workers)
 end;
 {timeout, NewState} ->
 DefunctWorkers = fabric_util:remove_done_workers(
diff --git a/src/fabric/src/fabric_streams.erl b/src/fabric/src/fabric_streams.erl
new file mode 100644
index 000..32217c3
--- /dev/null
+++ b/src/fabric/src/fabric_streams.erl
@@ -0,0 +1,119 @@
+% Licensed under the Apache License, Version 2.0 (the "License"); you may not
+% use this file except in compliance with the License. You may obtain a copy of
+% the License at
+%
+%   http://www.apache.org/licenses/LICENSE-2.0
+%
+% Unless required by applicable law or agreed to in writing, software
+% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+% License for the specific language governing permissions and limitations under
+% the License.
+
+-module(fabric_streams).
+
+-export([
+start/2,
+start/4,
+cleanup/1
+]).
+
+-include_lib("fabric/include/fabric.hrl").
+-include_lib("mem3/include/mem3.hrl").
+
+
+start(Workers, Keypos) ->
+start(Workers, Keypos, undefined, undefined).
+
+start(Workers0, Keypos, StartFun, Replacements) ->
+Fun = fun handle_stream_start/3,
+Acc = #stream_acc{
+workers = fabric_dict:init(Workers0, waiting),
+start_fun = StartFun,
+replacements = Replacements
+},
+Timeout = fabric_util:request_timeout(),
+case rexi_utils:recv(Workers0, Keypos, Fun, Acc, Timeout, infinity) of
+{ok, #stream_acc{workers=Workers}} ->
+true = fabric_view:is_progress_possible(Workers),
+AckedWorkers = fabric_dict:fold(fun(Worker, From, WorkerAcc) ->
+rexi:stream_start(From),
+[Worker | WorkerAcc]
+end, [], Workers),
+{ok, AckedWorkers};
+Else ->
+Else
+end.
+
+
+cleanup(Workers) ->
+fabric_util:cleanup(Workers).
+
+
+handle_stream_start({rexi_DOWN, _, {_, NodeRef}, _}, _, St) ->
+case fabric_util:remove_down_workers(St#stream_acc.workers, NodeRef) of
+{ok, Workers} ->
+{ok, St#stream_acc{workers=Workers}};
+error ->
+Reason = {nodedown, <<"progress not possible">>},
+{error, Reason}
+end;
+
+handle_stream_start({rexi_EXIT, Reason}, Worker, St) ->
+Workers = fabric_dict:erase(Worker, St#stream_acc.workers),
+Replacements = St#stream_acc.replacements,
+case {fabric_view:is_progress_possible(Workers), Reason} of
+{true, _} ->
+{ok, St#stream_acc{workers=Workers}};
+{false, {maintenance_mode, _Node}} when Replacements /= undefined ->
+% Check if we have replacements for this range
+% and start the new workers if so.
+case lists:keytake(Worker#shard.range, 1, Replacements) of
+{value, {_Range, WorkerReplacements}, NewReplacements} ->
+FinalWorkers = lists:foldl(fun(Repl, NewWorkers) ->
+NewWorker = (St#stream_acc.start_fun)(Repl),
+fabric_dict:store(NewWorker, waiting, NewWorkers)
+  

[Couchdb Wiki] Update of "CouchHack_April_2009" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "CouchHack_April_2009" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/CouchHack_April_2009?action=diff=17=18

Comment:
https://cwiki.apache.org/confluence/display/COUCHDB/CouchHack+April+2009

- CouchHack is a small CouchDB hacker event planned for April 19th - 22nd in Asheville NC.
  
- We will be renting a house for hacking, playing video games, building sofa forts and crashing. Anyone who wants to hack on or with CouchDB is welcome to come. Beds in the house are limited, first come, first served.
- 
- Current Hackers:
- 
-  * Jan Lehnardt - attending 19th - 21. Interested in linking against Mac OS X ICU, Erlang View Server, partial replication -- Teaching CouchDB
-  * J. Chris Anderson - attending 19 - 22 - Interested in Partitioning / Clustering, and p2p messaging CouchApp development.
-  * Damien Katz - attending 19 - 22. Interested in JSearch/FT Indexing support, and working on third party apps.
-  * Paul Davis - attending 19 - 22. External indexing integration. Another project TBA.
-  * Benjamin Young - attending the 20th (at least). Interested in hosting, management, and CouchDB as a CMS content repo replacement for MySQL (et al).
-  * Brad Anderson - attending 19 - 21. Partitioning / Clustering, Erlang View Server
- 
- We will mostly be working on CouchDB related stuff, some of it core to CouchDB, some of it external projects involving CouchDB.
- 
- Want to influence CouchDB but can't come? Consider becoming a CouchHack sponsor and help pay for the house or travel expenses.
- 
- If you are interested in hacking or sponsoring, contact Jan Lehnardt j...@apache.org.
- 
- List of [[CouchHack_April_2009_Sponsors]].
- 
- === CouchDB Work Done at CouchHack
- 
-  * Split `main.js` out into lots of little files. (And then Jan taught me how Makefiles work)
-  * Adding a batch PUT mode (which delays commit so that we can do a bulk index update)
- 


[Couchdb Wiki] Update of "CouchCamp2010" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "CouchCamp2010" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/CouchCamp2010?action=diff=9=10

Comment:
Migrated to https://cwiki.apache.org/confluence/display/COUCHDB/CouchCamp+2010

- All info about CouchCamp is accessible here:
  
- http://www.couch.io/couchcamp
- 
- A few speakers have been invited to speak but the majority of the time will be "unconference" style group participation and discussion.
- 
- = Unconference Whiteboard =
- 
- This space is intended to serve as a space to flesh out ideas for group discussions. People should feel free to add comments and questions to any section.
- 
- === CouchDB on mobile ===
- 
- What phones/devices are/can be targeted?
- Alternate implementations or ports of the current Apache (erlang) CouchDB?
- 
- === HTML5 CouchDB ===
- 
- IndexedDatabase and IDBCouch.
- WebStorage.
- 
- === CouchDB and the web security model ===
- 
- What do we need to do to allow CouchApps to play in the web sandbox? E.g. if I'm running a banking CouchApp and a chat CouchApp, and the chat CouchApp has a bug that allows the person you are chatting with to inject HTML/JS into your browser, how do we keep that from being a vector for attack on your banking data?
- 
- === Ideas and priorities for CouchDB 2.0 ===
- 
- Now that we're releasing 1.0, it's time to think about what the next 5 years of CouchDB development will bring. Alternate indexers? Binary attachment storage options? Refactoring to use web-machine? Of course, real decisions will be made on the dev@ list, but there's nothing like a campfire and some beers to get the ideas flowing.
- 
-* Replaceable storage (e.g., use ets for transient database documents)
- 
-* Realtime interactions with live db or filtered replica
-   * Filtered replication to diskless copy (data discovery via filtering)
-   * Browsing JSON document structure to discern data patterns (auto-documentation)
-   * Statistics of db contents (interactive mapreduce)
-   * Sandbox of real document subset
- * iterative code development
- * testing
- * error reproduction
- 
-* A model of error handling with the goal of conveying information to the user
- 
-* Command-line JSON-aware tools tuned to Couch interfaces
- 
- === GeoCouch and why it rules ===
- 
- Once you add some location information to your documents, you can ask your GeoCouch to give you a list of documents that are located in an area. Let's talk about how to integrate geo data into your applications, and why bringing CouchDB to the GIS world is important.
- 
- === All about BigCouch ===
- 
- [[http://github.com/cloudant/bigcouch|BigCouch]] is a highly-available clustering and sharding system for CouchDB built by the folks at Cloudant. We'll talk about the design choices behind the project, demo a local Big``Couch cluster, and chat about where the project is headed.
- 
- === How to contribute to CouchDB ===
- 
- Want to help with CouchDB, but need some hints on getting started? We'll give a tour of the codebase and show you how to find and fix bugs in the Erlang implementation. Also you can do a lot to help CouchDB, just by writing JavaScript, so we'll show you that too. Hopefully this will demystify the codebase, and you can start hacking on CouchDB.
- 


[Couchdb Wiki] Update of "CouchIn15Minutes" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "CouchIn15Minutes" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/CouchIn15Minutes?action=diff=7=8

Comment:
See http://docs.couchdb.org/en/stable/intro/index.html

- <>
  
- = Couch DB Quick Start =
- 
- (Tested with 0.9.0 on [[http://www.ubuntu/org'|Ubuntu]], wikified from my [[http://www.jroller.com/robertburrelldonkin/entry/couchdb_in_15_minutes|blog]])
- 
- == Install (very basic) ==
- 1. [[http://couchdb.apache.org/downloads.html|Download]], unpackage and cd to the directory
-  1. Read the README then follow the instructions (for Ubuntu, use [[http://dbpedia.org/page/Debian|Debian]])
-  1. (Ubuntu) Remember to apt-get the required libraries before building
-  1. Start Couch from the command line and check everything looks good
- 
- == Create a new Database ==
- 1. Create new database
-  1. Browse http://localhost:5984/_utils/
-  1. Click "Create Database"
-  1. Enter "example"
- 
- == "Hello, World!" (of course) ==
- 1. Now for "Hello, World!"
-  1. Couch is RESTful so you'll need an HTTP client. These instructions are for telnet (those who dislike the command line could use [[http://localhost:5984/_utils/database.html?example/_design_docs|futon]] or, if you're using Mac OS X, [[http://ditchnet.org/httpclient/|HTTPClient]]).
-  1. Type: {{{$ telnet localhost 5984}}}
-  1. Response: {{{
- Trying 127.0.0.1...
- Connected to localhost.
- Escape character is '^]'.}}}
-  1. CutNPaste: {{{
- PUT /example/some_doc_id HTTP/1.0
- Content-Length: 29
- Content-Type: application/json
- 
- {"greetings":"Hello, World!"} }}}
-  1. Response: {{{
- HTTP/1.0 201 Created
- Server: CouchDB/0.9.0 (Erlang OTP/R12B)
- Etag: "1-518824332"
- Date: Wed, 24 Jun 2009 13:33:11 GMT
- Content-Type: text/plain;charset=utf-8
- Content-Length: 51
- Cache-Control: must-revalidate
- 
- {"ok":true,"id":"some_doc_id","rev":"1-518824332"}
- Connection closed by foreign host.}}}
-  1. Browse http://localhost:5984/example/some_doc_id to see {{{
- {"_id":"some_doc_id","_rev":"1-518824332","greetings":"Hello, World!"} }}}
- 
- == Document creation recap ==
- 1. Huh?
-  1. Couch is RESTful so to create a document PUT (as above) or POST
-  1. Couch uses a JSON API. So PUT a document as JSON and GET results as JSON
-  1. To view the data, use a view (Doh!)
-  1. Each document has a unique "_id"
-  1. Each document is versioned with a "_rev"
- 
- == Create a View and...view it ==
- 1. Relax and take a look at the view
-  1. (Well, actually I'm going to use a "show" but it'll demonstrate the flavour)
-  1. Again {{{
- $ telnet localhost 5984
- Trying 127.0.0.1...
- Connected to localhost.
- Escape character is '^]'.
- PUT /example/_design/render HTTP/1.0
- Content-Length: 77
- Content-Type: application/json
- 
- {"shows" : {"salute" : "function(doc, req) {return {body: doc.greetings}}"}} }}}
-  1. Response: {{{
- HTTP/1.0 201 Created
- Server: CouchDB/0.9.0 (Erlang OTP/R12B)
- Etag: "1-2041852709"
- Date: Wed, 01 Jul 2009 06:08:59 GMT
- Content-Type: text/plain;charset=utf-8
- Content-Length: 55
- Cache-Control: must-revalidate
- 
- {"ok":true,"id":"_design/render","rev":"1-2041852709"}
- Connection closed by foreign host. }}}
-  1. Browse http://localhost:5984/example/_design/render/_show/salute/some_doc_id
- 
- == Summary of what a View is and does ==
- 1. What Just Happened?
-  1. A "show" directly renders a document using JavaScript
-  1. "Shows" are added to a design document (in this case "/_design/render" via the "shows" property)
-  1. "body: doc.greetings" fills the response body with the "greetings" property
-  1. GET _design/render/_show/salute/some_doc_id to use the "salute" show to render the "some_doc_id" document added above
- 


[Couchdb Wiki] Update of "CouchHack_April_2009_Sponsors" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "CouchHack_April_2009_Sponsors" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/CouchHack_April_2009_Sponsors?action=diff=4=5

Comment:
Migrated to https://cwiki.apache.org/confluence/display/COUCHDB/CouchHack+April+2009

- Big thanks to the CouchHack April 2009 Sponsors!
  
- [[http://www.bigbluehat.com/|{{http://wiki.apache.org/couchdb-data/attachments/CouchHack_April_2009_Sponsors/attachments/bigbluehat.png}}]]
- <>'''web manufacturing company'''
- 
- We are impressed by CouchDB's use of "pure" web technologies, and are considering CouchDB for a future backend for our [[http://www.blueinkcms.com/|BlueInk]] Content Management System.
- 
- [[http://cybernetics.hudora.biz/|{{http://i.hdimg.net/480x320/MF35ZEBY4Z5RVWBCZ47O3PMEWSLGPZK701.jpeg}}]]
- <>'''Hudora Cybernetics'''
- 
- We run [[http://blogs.23.nu/c0re/topics/couchdb/|some]] major infrastructure in our company with CouchDB backends - e.g. the image above is served from a CouchDB instance. We are grateful to the superb CouchDB team and wish them happy hacking!
- 
- (If you are a sponsor, feel free to add yourself with any promotional links you see fit.)
- 


[Couchdb Wiki] Update of "CouchHack" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "CouchHack" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/CouchHack?action=diff=2=3

Comment:
Migrated to https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=39621775=contextnavchildmode

- CouchHack is a CouchDB hacker event.
  
-  * [[CouchHack_April_2009]]
- 


[Couchdb Wiki] Update of "CouchDB_meetups" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "CouchDB_meetups" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/CouchDB_meetups?action=diff=3=4

Comment:
https://cwiki.apache.org/confluence/display/COUCHDB/Meetups

- This page has moved to: https://cwiki.apache.org/confluence/display/COUCHDB/Meetups
  


[Couchdb Wiki] Update of "ConfiguringDistributedSystems" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "ConfiguringDistributedSystems" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/ConfiguringDistributedSystems?action=diff=6=7

Comment:
Migrated to http://docs.couchdb.org/en/stable/cluster/index.html

- #redirect Configuring_distributed_systems
- This is a stub for a page to discuss how to actually get couchdb running in a distributed fashion.
  
- Distributed CouchDB implementations:
- 
-  * CouchDB currently scales for reads, by allowing synchronization between multiple servers.
- 
-  * CouchDB does not currently support partitioning.
- 
- (couch-dev post from Jan Lehnardt - July 2008)
- {{{
- At the moment, CouchDB runs best on a single machine
- with multiple machines for a cluster using replication to
- synchronise data. Erlang allows a VM to run on multiple
- machines and we do not yet take advantage of that fact.
- This is an area that is worth investigating.
- 
- The road map is at http://incubator.apache.org/couchdb/roadmap.html
- 
- ... scaling parts are Future Feature work.
- A couple of people have voiced interest in contributing there
- especially the database partitioning, but nothing has come
- out of that yet.
- }}}
- 
- == Editorial Notes ==
- 
-  * I see that there is replication via the 'replication' functionality on the 
http://localhost:5984/_utils console interface, but how does one distribute a 
database across, say 10 hosts?
-  * Is there a way to specify the number of copies of a piece of data?  
(Presumes not all hosts have copies of each piece of data)
-  * Is there a piece of this that can be configured in the couch.ini file, 
such that when the topology changes (i.e. server add or removal) things can 
be put back into sync?
- 
- Excerpts from the Architectural Document, 
http://incubator.apache.org/couchdb/docs/overview.html :
- 
- {{{
- Using just the basic replication model, many traditionally single server 
database applications can be made distributed with almost no extra work.
- }}}
- 
-  * Let's try to document this.  What do we mean by '''distributed'''?
- 
- === Distributed defined ===
- 
- Here's what some people might ''assume'' we mean by distributed data store:
- 
-  * We (couchdb) have a client which will '''shard''' the data by key, and 
direct it to the correct server (shard), such that the writes of the system 
will '''scale'''.  That is, there are many ''writers'' in a collision-free 
update environment.
-  * Reads may scale if they outnumber the writes using some form of 
replication for read-only-clients.
-  * If a master data store node is lost, then the client (or some proxy 
mechanism) can switch over to a new master data store, which is ''really up to 
date'' (i.e. milliseconds), and the client will continue without a hitch.
- 


[Couchdb Wiki] Update of "Configuring_distributed_systems" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "Configuring_distributed_systems" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/Configuring_distributed_systems?action=diff=4=5

Comment:
CouchDB 2.x is distributed; see 
http://docs.couchdb.org/en/stable/cluster/index.html

- <>
  
- This is a stub for a page to discuss how to actually get CouchDB running in a 
distributed fashion.
- 
- Distributed CouchDB implementations:
- 
-  * CouchDB currently scales for reads, by allowing synchronization between 
multiple servers.
- 
-  * CouchDB does not currently support partitioning.
- 
- (couch-dev post from Jan Lehnardt - July 2008)
- {{{
- At the moment, CouchDB runs best on a single machine
- with multiple machines for a cluster using replication to
- synchronise data. Erlang allows a VM to run on multiple
- machines and we do not yet take advantage of that fact.
- This is an area that is worth investigating.
- 
- The road map is at 
https://issues.apache.org/jira/browse/COUCHDB?report=com.atlassian.jira.plugin.system.project:roadmap-panel
- 
- ... scaling parts are Future Feature work.
- A couple of people have voiced interest in contributing there
- especially the database partitioning, but nothing has come
- out of that yet.
- }}}
- 
- == Editorial Notes ==
- 
-  * I see that there is replication via the 'replication' functionality on the 
http://localhost:5984/_utils console interface, but how does one distribute a 
database across, say 10 hosts?
-  * Is there a way to specify the number of copies of a piece of data?  
(Presumes not all hosts have copies of each piece of data)
-  * Is there a piece of this that can be configured in the couch.ini file, 
such that when the topology changes (i.e. server add or removal) things can 
be put back into sync?
- 
- Excerpts from the Architectural Document, 
http://incubator.apache.org/couchdb/docs/overview.html :
- 
- {{{
- Using just the basic replication model, many traditionally single server 
database applications can be made distributed with almost no extra work.
- }}}
- 
-  * Let's try to document this.  What do we mean by '''distributed'''?
- 
- === Distributed defined ===
- 
- Here's what some people might ''assume'' we mean by distributed data store:
- 
-  * We (couchdb) have a client which will '''shard''' the data by key, and 
direct it to the correct server (shard), such that the writes of the system 
will '''scale'''.  That is, there are many ''writers'' in a collision-free 
update environment.
-  * Reads may scale if they outnumber the writes using some form of 
replication for read-only-clients.
-  * If a master data store node is lost, then the client (or some proxy 
mechanism) can switch over to a new master data store, which is ''really up to 
date'' (i.e. milliseconds), and the client will continue without a hitch.
- 


[Couchdb Wiki] Update of "ErlInitDebug" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "ErlInitDebug" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/ErlInitDebug?action=diff=1=2

Comment:
Migrated to http://docs.couchdb.org/en/stable/install/troubleshooting.html

- If CouchDB fails early in its launch, it's useful for 
developers to see the messy details of what is going on.
  
- First, ensure that CouchDB is set to log at debug level in your system's 
{{{local.ini}}}:
- 
- {{{
- [log]
- level = debug
- }}}
- 
- Then you'll need to know where your platform stores its CouchDB libraries; 
the easiest way is to check the output of {{{grep ERL_LIBS `which couchdb`}}} 
and adapt it as required. Run these commands only as the user account that 
CouchDB will actually run as, to avoid breaking permissions for subsequent runs:
- 
- {{{
- erl -env ERL_LIBS 
$ERL_LIBS:/usr/local/Cellar/couchdb/1.3.x/lib/couchdb/erlang/lib -couch_ini 
-init_debug -emu_args -smp auto +W i +v -s couch
- }}}
- 
- The output will be similar to this, for a working system:
- 
- {{{
- Executing: /usr/local/Cellar/erlang/R16B/lib/erlang/erts-5.10.1/bin/beam.smp 
/usr/local/Cellar/erlang/R16B/lib/erlang/erts-5.10.1/bin/beam.smp -W i -v -- 
-root /usr/local/Cellar/erlang/R16B/lib/erlang -progname erl -- -home 
/Users/dch -- -couch_ini -init_debug -smp auto -s couch
- 
- warning: -v (only in debug compiled code)
- {progress,preloaded}
- {progress,kernel_load_completed}
- {progress,modules_loaded}
- {start,heart}
- {start,error_logger}
- {start,application_controller}
- {progress,init_kernel_started}
- {apply,{application,load,[{application,stdlib,[{description,"ERTS  CXC 138 
10"},{vsn,"1.19.1"},{id,[]},{modules,[array,base64,beam_lib,binary,c,calendar,dets,dets_server,dets_sup,dets_utils,dets_v8,dets_v9,dict,digraph,digraph_utils,edlin,edlin_expand,epp,eval_bits,erl_bits,erl_compile,erl_eval,erl_expand_records,erl_internal,erl_lint,erl_parse,erl_posix_msg,erl_pp,erl_scan,erl_tar,error_logger_file_h,error_logger_tty_h,escript,ets,file_sorter,filelib,filename,gb_trees,gb_sets,gen,gen_event,gen_fsm,gen_server,io,io_lib,io_lib_format,io_lib_fread,io_lib_pretty,lib,lists,log_mf_h,math,ms_transform,orddict,ordsets,otp_internal,pg,pool,proc_lib,proplists,qlc,qlc_pt,queue,random,re,sets,shell,shell_default,slave,sofs,string,supervisor,supervisor_bridge,sys,timer,unicode,win32reg,zip]},{registered,[timer_server,rsh_starter,take_over_monitor,pool_master,dets]},{applications,[kernel]},{included_applications,[]},{env,[]},{maxT,infinity},{maxP,infinity}]}]}}
- {progress,applications_loaded}
- {apply,{application,start_boot,[kernel,permanent]}}
- Erlang R16B (erts-5.10.1) [source] [64-bit] [smp:8:8] [async-threads:10] 
[hipe] [kernel-poll:false] [dtrace]
- 
- {apply,{application,start_boot,[stdlib,permanent]}}
- {apply,{c,erlangrc,[]}}
- {progress,started}
- Eshell V5.10.1  (abort with ^G)
- 1>
- =PROGRESS REPORT 21-Mar-2013::10:29:05 ===
-   supervisor: {local,sasl_safe_sup}
-  started: [{pid,<0.48.0>},
-{name,alarm_handler},
-{mfargs,{alarm_handler,start_link,[]}},
-{restart_type,permanent},
-{shutdown,2000},
-{child_type,worker}]
- 
- =PROGRESS REPORT 21-Mar-2013::10:29:05 ===
-   supervisor: {local,sasl_safe_sup}
-  started: [{pid,<0.49.0>},
-{name,overload},
-{mfargs,{overload,start_link,[]}},
-{restart_type,permanent},
-{shutdown,2000},
-{child_type,worker}]
- 
- =PROGRESS REPORT 21-Mar-2013::10:29:05 ===
-   supervisor: {local,sasl_sup}
-  started: [{pid,<0.47.0>},
-{name,sasl_safe_sup},
-{mfargs,
-{supervisor,start_link,
-[{local,sasl_safe_sup},sasl,safe]}},
-{restart_type,permanent},
-{shutdown,infinity},
-{child_type,supervisor}]
- 
- =PROGRESS REPORT 21-Mar-2013::10:29:05 ===
-   supervisor: {local,sasl_sup}
-  started: [{pid,<0.50.0>},
-{name,release_handler},
-{mfargs,{release_handler,start_link,[]}},
-{restart_type,permanent},
-{shutdown,2000},
-{child_type,worker}]
- 
- =PROGRESS REPORT 21-Mar-2013::10:29:05 ===
-  application: sasl
-   started_at: nonode@nohost
- 
- =PROGRESS REPORT 21-Mar-2013::10:29:05 ===
-   supervisor: {local,inets_sup}
-  started: [{pid,<0.56.0>},
-{name,ftp_sup},
-{mfargs,{ftp_sup,start_link,[]}},
-{restart_type,permanent},
-

[Couchdb Wiki] Update of "FrequentlyAskedQuestions" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "FrequentlyAskedQuestions" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/FrequentlyAskedQuestions?action=diff=21=22

Comment:
Migrated to https://docs.couchdb.org/

- #redirect Frequently_asked_questions
- ## page was renamed from FAQ
- ## page was renamed from Faq
A handy FAQ for all your CouchDB-related questions.
  
-   * [[#what_is_couchdb|What is CouchDB?]]
-   * [[#is_couchdb_ready_for_production|Is CouchDB Ready for Production?]]
-   * [[#what_does_couch_mean|What Does Couch Mean?]]
-   * [[#what_language|What Language is CouchDB Written in?]]
-   * [[#what_platform|What Platforms are Supported?]]
-   * [[#what_license|What is the License?]]
-   * [[#how_much_stuff|How Much Stuff can I Store in CouchDB?]]
-   * [[#how_sequences|How Do I Do Sequences?]]
-   * [[#how_replication|How Do I Use Replication?]]
-   * [[#how_spread_load|How can I spread load across multiple nodes?]]
-   * [[#how_fast_views|How Fast are CouchDB Views?]]
-   * [[#why_no_mnesia|Why Does CouchDB Not Use Mnesia?]]
-   * [[#i_can_has_no_http|Can I talk to CouchDB without going through the HTTP 
API?]]
-   * [[#update_views_more_often|I want to update my view indexes more often 
than only when a user reads it. How do I do that best?]]
-   * [[#secure_remote_server|I use CouchDB on a remote server and I don't want 
it to listen on a public port for security reasons. Is there a way to connect 
to it from my local machine or can I still use Futon with it?]]
-   * [[#slow_view_building|Creating my view index takes ages, WTF?]]
- 
- If you have a question not yet answered in this FAQ, please hit the edit 
button and add your question at the end. Check back in a few days; someone may 
have provided an answer.
- 
- <>
- == What is CouchDB? ==
- 
- CouchDB is a document-oriented, Non-Relational Database Management Server 
(NRDBMS). The 
[[http://incubator.apache.org/couchdb/docs/intro.html|Introduction]] and 
[[http://incubator.apache.org/couchdb/docs/overview.html|Overview]] provide a 
high level overview of the CouchDB system.
- 
- <>
- == Is CouchDB Ready for Production? ==
- 
- Alpha Release. CouchDB has not yet reached version 1.0. There will likely be 
data-storage format changes and incompatible HTTP API changes between now and 
1.0. However, there are projects successfully using CouchDB in a variety of 
contexts. See InTheWild for a partial list of projects using CouchDB.
- 
- <>
- == What Does Couch Mean? ==
- 
- It's an acronym, Cluster Of Unreliable Commodity Hardware. This is a 
statement of Couch's long term goals of massive scalability and high 
reliability on fault-prone hardware. The distributed nature and flat address 
space of the database will enable node partitioning for storage scalability 
(with a map/reduce style query facility) and clustering for reliability and 
fault tolerance.
- 
- <>
- == What Language is CouchDB Written in? ==
- 
- Erlang, a concurrent, functional programming language with an emphasis on 
fault tolerance. Early work on CouchDB was started in C but was replaced by 
the Erlang/OTP platform. Erlang has so far proven an excellent match for this 
project.
- 
- CouchDB's default view server uses Mozilla's SpiderMonkey JavaScript library, 
which is written in C. It also supports easy integration of view servers 
written in any language.
- 
- <>
- == What Platforms are Supported? ==
- 
- Most POSIX systems, including GNU/Linux and OS X.
- 
- Windows is not officially supported, but it should work; please let us know.
- 
- <>
- == What is the License? ==
- 
- [[http://www.apache.org/licenses/LICENSE-2.0.html|Apache 2.0]]
- 
- <>
- == How Much Stuff can I Store in CouchDB? ==
- 
- With node partitioning, virtually unlimited. For a single database instance, 
the practical scaling limits aren't yet known.
- 
- <>
- == How Do I Do Sequences? ==
- 
- Or, where is my AUTO_INCREMENT?! With replication, sequences are hard to 
realize. Sequences are often used to ensure unique identifiers for each row in 
a database table. CouchDB generates unique ids on its own, and you can specify 
your own as well, so you don't really need a sequence here. If you use a 
sequence for something else, you might find a way to express it in CouchDB in 
another way.
- 
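Since CouchDB accepts client-supplied ''_id'' values, the usual substitute for an AUTO_INCREMENT sequence is a client-generated UUID. A minimal Python sketch of that idea (the {{{new_doc}}} helper and the document fields are hypothetical, not part of any CouchDB API):

```python
import uuid

def new_doc(body):
    """Return a copy of body with a client-generated unique _id attached."""
    doc = dict(body)
    doc["_id"] = uuid.uuid4().hex  # 32-char random hex string
    return doc

a = new_doc({"type": "article", "title": "Hello"})
b = new_doc({"type": "article", "title": "World"})
# Random UUIDs collide only with negligible probability,
# so no central sequence counter is needed.
```

Each client can mint ids independently, which is exactly what makes this replication-friendly where a central sequence would not be.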
- <>
- == How Do I Use Replication? ==
- 
- {{{
- POST /_replicate?source=$source_database&target=$target_database
- }}}
- 
- Where $source_database and $target_database can be the names of local 
databases or full URIs of remote databases. Both databases need to be created 
before they can be replicated from or to.
- 
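As a sketch, the request above can be assembled programmatically; the database names and the remote URI below are placeholder values, and the query-string form shown is the old pre-1.0 style quoted in this FAQ, not the modern JSON-body API:

```python
from urllib.parse import urlencode

def replicate_request(source, target):
    """Build the method and path for a query-string style _replicate call.

    Both databases must already exist; source/target may be local names
    or full remote URIs.
    """
    return "POST", "/_replicate?" + urlencode({"source": source,
                                               "target": target})

method, path = replicate_request("blog", "http://example.org:5984/blog")
# urlencode percent-escapes the remote URI so it survives as a query value
```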
- <>
- == How can I spread load across multiple nodes? ==
- 
- Using an HTTP proxy like nginx, you can load balance GETs across nodes, and 
direct all POSTs, PUTs and DELETEs to a master node. CouchDB's triggered 
replication facility can keep multiple read-only servers in sync with a single 
master server, so by replicating from master -> slaves 

[Couchdb Wiki] Update of "AndroidOtpPatch" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "AndroidOtpPatch" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/AndroidOtpPatch?action=diff=6=7

Comment:
Moved to 
https://cwiki.apache.org/confluence/display/COUCHDB/Old+Android+Compile+Info

- For use with [[Installing_on_Android]]
  
- {{{
- diff --git a/erts/emulator/Makefile.in b/erts/emulator/Makefile.in
- index fb8d718..4ed25c9 100644
- --- a/erts/emulator/Makefile.in
- +++ b/erts/emulator/Makefile.in
- @@ -352,6 +352,7 @@ EMULATOR_EXECUTABLE = beam$(TF_MARKER).dll
-  else
-  ifeq ($(CC), agcc)
-  EMULATOR_EXECUTABLE = libbeam$(TF_MARKER).so
- +EMULATOR_EXECUTABLE_REG = beam$(TF_MARKER)
-  else
-  EMULATOR_EXECUTABLE = beam$(TF_MARKER)
-  endif
- @@ -374,7 +375,11 @@ ifeq ($(FLAVOR)-@ERTS_BUILD_SMP_EMU@,smp-no)
-  all:
-   @echo '*** Omitted build of emulator with smp support'
-  else
- +ifeq ($(CC), agcc)
- +all: generate erts_lib zlib pcre $(BINDIR)/$(EMULATOR_EXECUTABLE) 
$(BINDIR)/$(EMULATOR_EXECUTABLE_REG) $(UNIX_ONLY_BUILDS)
- +else
-  all: generate erts_lib zlib pcre $(BINDIR)/$(EMULATOR_EXECUTABLE) 
$(UNIX_ONLY_BUILDS)
- +endif
-  ifeq ($(OMIT_OMIT_FP),yes)
-   @echo '* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *'
-   @echo '* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *'
- @@ -453,6 +458,7 @@ release_spec: all
-   $(INSTALL_DATA) $(RELEASE_INCLUDES) $(RELEASE_PATH)/usr/include
-   $(INSTALL_DATA) $(RELEASE_INCLUDES) $(RELSYSDIR)/include
-   $(INSTALL_PROGRAM) $(BINDIR)/$(EMULATOR_EXECUTABLE) $(RELSYSDIR)/bin
- + $(INSTALL_PROGRAM) $(BINDIR)/$(EMULATOR_EXECUTABLE_REG) $(RELSYSDIR)/bin
-  ifeq ($(ERLANG_OSTYPE), unix)
-   $(INSTALL_PROGRAM) $(BINDIR)/$(CS_EXECUTABLE) $(RELSYSDIR)/bin
-  endif
- @@ -1013,6 +1019,13 @@ ifeq ($(CC), agcc)
-  $(BINDIR)/$(EMULATOR_EXECUTABLE): $(INIT_OBJS) $(OBJS) $(DEPLIBS)
-   $(PURIFY) $(LD) -o $(BINDIR)/$(EMULATOR_EXECUTABLE) \
-   $(HIPEBEAMLDFLAGS) $(LDFLAGS) $(DEXPORT) $(INIT_OBJS) $(OBJS) $(LIBS) 
-shared
- +
- +$(OBJDIR)/beam.o:
- + $(CC) $(CFLAGS) $(INCLUDES) -c beam/beam.c -o $(OBJDIR)/beam.o
- +
- +$(BINDIR)/$(EMULATOR_EXECUTABLE_REG): $(OBJDIR)/beam.o
- + $(PURIFY) $(LD) -o $(BINDIR)/$(EMULATOR_EXECUTABLE_REG) \
- + $(HIPEBEAMLDFLAGS) $(LDFLAGS) $(DEXPORT) $(OBJDIR)/beam.o $(LIBS) 
-L$(BINDIR) -lbeam
-  else
-  $(BINDIR)/$(EMULATOR_EXECUTABLE): $(INIT_OBJS) $(OBJS) $(DEPLIBS)
-   $(PURIFY) $(LD) -o $(BINDIR)/$(EMULATOR_EXECUTABLE) \
- diff --git a/erts/emulator/sys/unix/erl_child_setup.c 
b/erts/emulator/sys/unix/erl_child_setup.c
- index 7c6e4a2..c1a1549 100644
- --- a/erts/emulator/sys/unix/erl_child_setup.c
- +++ b/erts/emulator/sys/unix/erl_child_setup.c
- @@ -116,7 +116,11 @@ main(int argc, char *argv[])
-   execv(argv[CS_ARGV_NO_OF_ARGS],&(argv[CS_ARGV_NO_OF_ARGS + 1]));
-   }
-  } else {
- +#ifdef ANDROID_ARM
- + execl("/system/bin/sh", "sh", "-c", argv[CS_ARGV_CMD_IX], (char *) 
NULL);
- +#else
-   execl("/bin/sh", "sh", "-c", argv[CS_ARGV_CMD_IX], (char *) NULL);
- +#endif
-  }
-  return 1;
-  }
- diff --git a/erts/emulator/sys/unix/sys.c b/erts/emulator/sys/unix/sys.c
- index 31ab5d0..9a260a2 100644
- --- a/erts/emulator/sys/unix/sys.c
- +++ b/erts/emulator/sys/unix/sys.c
- @@ -1539,7 +1539,11 @@ static ErlDrvData spawn_start(ErlDrvPort port_num, 
char* name, SysDriverOpts* op
-   }
-   }
-   } else {
- +#ifdef ANDROID_ARM
- + execle("/system/bin/sh", "sh", "-c", cmd_line, (char *) NULL, 
new_environ);
- +#else
-   execle("/bin/sh", "sh", "-c", cmd_line, (char *) NULL, 
new_environ);
- +#endif
-   }
-   child_error:
-   _exit(1);
- @@ -1660,7 +1664,12 @@ static ErlDrvData spawn_start(ErlDrvPort port_num, 
char* name, SysDriverOpts* op
-   fcntl(i, F_SETFD, 1);
-  
-  qnx_spawn_options.flags = _SPAWN_SETSID;
- +#ifdef ANDROID_ARM
- +/* Are we really in QNX?  Then we don't need this special case here... */
- +if ((pid = spawnl(P_NOWAIT, "/system/bin/sh", "/system/bin/sh", "-c", 
cmd_line, 
- +#else
-  if ((pid = spawnl(P_NOWAIT, "/bin/sh", "/bin/sh", "-c", cmd_line, 
- +#endif
-(char *) 0)) < 0) {
-   erts_free(ERTS_ALC_T_TMP, (void *) cmd_line);
-  reset_qnx_spawn();
- diff --git a/erts/emulator/beam/beam.c b/erts/emulator/beam/beam.c
- new file mode 100644
- index 000..167b96e
- --- /dev/null
- +++ b/erts/emulator/beam/beam.c
- @@ -0,0 +1,2 @@
- +void erl_start(int argc, char** argv);
- +int main(int argc, char** argv) { erl_start(argc, argv); }
- diff --git a/lib/crypto/c_src/Makefile.in b/lib/crypto/c_src/Makefile.in
- index 0b39808..5d9658e 100644
- --- a/lib/crypto/c_src/Makefile.in
- +++ b/lib/crypto/c_src/Makefile.in
- @@ -108,7 +108,7 @@ $(OBJDIR)/%.o: %.c
-  
-  $(LIBDIR)/crypto_drv.so: $(OBJS)
-   $(INSTALL_DIR) $(LIBDIR) 
- - $(LD) $(LDFLAGS) -o 

[Couchdb Wiki] Update of "ApplicationSchema" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "ApplicationSchema" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/ApplicationSchema?action=diff=4=5

Comment:
Better resource: http://guide.couchdb.org/

- A simple example of a CouchDB application schema.
  
- The way CouchDB works is still a bit cryptic for many of us coming from the 
RDBMS world. So the idea of this page is to write the schema of a typical 
application the RDBMS way and rewrite it the document-oriented way. I was 
thinking about a simple blog application:
- 
-   * Users can post many articles
-   * Articles can have many comments
-   * Articles can have many tags and tags can have many articles
- 
- == RDBMS Schema ==
- 
-   * Users (''id'', ''name'', ''password'')
-   * Articles (''id'', ''title'', ''body'', ''posted_on'', ''user_id'') 
(''user_id'' being a foreign key)
-   * Comments (''id'', ''title'', ''body'', ''posted_on'', ''user_id'', 
''article_id') (''user_id'' and ''article_id'' being foreign keys)
-   * Tags (''id'', ''name'')
-   * tags_associations(''id'', ''article_id'', ''tag_id'', ''user_id'')
- 
- == Relevant Threads ==
- 
- This part needs to be expanded by people who better understand the subject.
- 
- This thread describes how to manage comments:
- 
-   * 
[[http://groups.google.com/group/couchdb/browse_thread/thread/84cf1bb522d1fbbf|Question/Concern:
 Document Sizes]]
- 
- These threads describe how to manage tags:
- 
-   * 
[[http://groups.google.com/group/couchdb/browse_thread/thread/84cf1bb522d1fbbf/c9bf8e95e421b675?lnk=gst=patcito=3#c9bf8e95e421b675|Question/Concern:
 Document Sizes]]
-   * 
[[http://groups.google.com/group/couchdb/browse_thread/thread/a11521505db8eb3c/07c29b951c4707e1#07c29b951c4707e1|Basic
 Application Schema Example for the wiki]]
- 
- == CouchDB Schema ==
- 
- These examples use the database ''blog''.
- 
- === Data Storage ===
- 
-  Users 
- 
- {{{
- _id: autogenerated_user_id1
- type: "user"
- name: "J. D. Citizen"
- password: "qwerty"
- }}}
- 
-  Articles 
- 
- {{{
- _id: autogenerated_article_id1
- user_id: autogenerated_user_id1
- type: "article"
- title: "RDBMSs SUCK OMG"
- body: "I thought what I'd do was, I'd pretend I was one of those deaf-
- mutes."
- created: "Fri Oct 12 04:46:58 +1000 2007"
- comments: comment_id1, comment_id2, etc
- tags: [ "rdbms", "things that suck" ]
- }}}
- 
-  Comments 
- 
- {{{
- _id: autogenerated_comment_id1
- article_id: autogenerated_article_id1
- user_id: autogenerated_user_id2
- type: "comment"
- title: "i liek ur blog"
- body: "i liek ur blog a lot LOL. please ad me on my space. clik hear for hot 
girl action LOL"
- created: "Fri Oct 12 04:46:59 +1000 2007"
- }}}
- 
- Bear in mind that, due to the flat nature of the document storage, there 
are no "tables" to do a SELECT from. Most of the real power of CouchDB is in 
its views.
- 
- === Views ===
- 
- Views are stored in design documents for grouping and giving a sense of 
structure. Design documents are regular documents in CouchDB; it knows they are 
design documents because you preface the document's ''_id'' attr with 
''_design/''.
- 
- Let's add a group of views for our tags. Everybody wants to be able to fetch 
all documents tagged ''rdbms'', so let's add a view that will get us there.
- 
-  _design/tags 
- 
- {{{
- "views": {
-"to_docs": "
-   function(doc) {
- for( var i=0; i < doc.tags.length; i++) {
-   emit(doc.tags[i], null);
- }
-   }"
- }
- }}}
- 
- This view would be accessed by going to ''/blog/_design/tags/to_docs''. The 
returned content would look like:
- 
- {{{
- {"view":"_design/tags/to_docs_inc","total_rows":9, "offset":0,
-  "rows":[
-   {"_id":"autogenerated_article_id1","_rev":"684343246","key":"things that 
suck"},
-   {"_id":"autogenerated_article_id1","_rev":"684343246","key":"rdbms"}
- }}}
- 
- Note: 2 rows are returned for the same article. This is intended. CouchDB 
will let us filter the data in our views a bit with extra query parameters. 
Going to ''/blog/_design/tags/to_docs?key="rdbms"'' yields only documents 
tagged ''rdbms''.
- 
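The map-and-filter behaviour described above can be mimicked in a few lines of Python; this is only an illustration of the view semantics (one emitted row per tag, then filtering by key), not how CouchDB is implemented internally:

```python
def map_tags(doc):
    """Mimic the to_docs map function: emit one (key, value) row per tag."""
    for tag in doc.get("tags", []):
        yield (tag, None)

docs = [{"_id": "autogenerated_article_id1",
         "tags": ["rdbms", "things that suck"]}]

# Each doc contributes one row per tag, keyed by the tag.
rows = [(doc["_id"], key) for doc in docs for key, _ in map_tags(doc)]

# Querying with ?key="rdbms" keeps only rows whose key matches.
rdbms_rows = [r for r in rows if r[1] == "rdbms"]
```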
- Say we're returning a lot of records here and want to reduce them by 
including the title (or if no title exists, the name) attribute of the thing 
that was tagged along with the results. Change the following line in the 
''to_docs'' query above:
- 
- {{{
- // emit(doc.tags[i], null);
- // becomes
- emit(doc.tags[i], doc.title || doc.name);
- }}}
- 
- Optionally, this view could be named to_docs_with_title so as to be available 
only if the extra data is desired.
- 
- Using the sample view given above, many other views could be implemented. I 
give only a few sample URLs as follows; implementation is left as an exercise:
- 
-   * /blog/_design/comments/on_article?key="autogenerated_article_id1"
-   * /blog/_design/comments/by_user?key="autogenerated_user_id1"
-   * 

[Couchdb Wiki] Update of "AndroidEnv" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "AndroidEnv" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/AndroidEnv?action=diff=1=2

Comment:
Moved to 
https://cwiki.apache.org/confluence/display/COUCHDB/Old+Android+Compile+Info

- For use with [[Installing_on_Android]].
  
- This script simply sets up a good environment for building with Android.
- 
- {{{
- paths="
- $HOME/software/android/scripts
- $HOME/software/android/other/apk
- $HOME/software/android/sdk/platform-tools
- $HOME/software/android/sdk/platforms/android-8/tools
- 
$HOME/software/android/sdk/sources/prebuilt/linux-x86/toolchain/arm-eabi-4.4.0/bin
- $HOME/software/android/sdk/tools
- "
- 
- for i in $paths
- do
- if [[ -d $i ]]; then
- PATH=$PATH:$i
- fi
- done
- 
- export PATH
- 
- # Java stuff for building Android from sources
- echo "For building Android from sources:"
- echo "$ export JAVA_HOME=$HOME/software/jdk1.5.0_22"
- echo "$ export ANDROID_JAVA_HOME=\$JAVA_HOME"
- echo "$ export PATH=\$JAVA_HOME/bin:$PATH"
- echo "$ cd $HOME/software/android/sdk/sources"
- echo "$ source build/envsetup.sh"
- echo "$ lunch"
- echo "$ make"
- }}}
- 


[Couchdb Wiki] Update of "AndroidReleasePatch" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "AndroidReleasePatch" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/AndroidReleasePatch?action=diff=7=8

Comment:
Moved to 
https://cwiki.apache.org/confluence/display/COUCHDB/Old+Android+Compile+Info

- For use with [[Installing_on_Android]]
  
- {{{
- diff -ru 
release.bak/sdcard/Android/data/com.your.namespace/couchdb/bin/couchdb 
release/sdcard/Android/data/com.your.namespace/couchdb/bin/couchdb
- --- release.bak/sdcard/Android/data/com.your.namespace/couchdb/bin/couchdb
2011-02-05 01:26:00.0 -0700
- +++ release/sdcard/Android/data/com.your.namespace/couchdb/bin/couchdb
2011-02-08 16:42:00.0 -0700
- @@ -12,6 +12,9 @@
-  # License for the specific language governing permissions and limitations 
under
-  # the License.
-  
- +export HOME=/data/data/com.your.namespace
- +export 
LD_LIBRARY_PATH=$HOME/erlang/erts-5.7.5/bin:$HOME/couchdb/bin:$HOME/couchdb/lib/couchdb/bin
- +export PATH=$HOME/erlang/bin:$HOME/couchdb/bin:$PATH
-  BACKGROUND=false
-  
DEFAULT_CONFIG_DIR=/sdcard/Android/data/com.your.namespace/couchdb/etc/couchdb/default.d
-  
DEFAULT_CONFIG_FILE=/sdcard/Android/data/com.your.namespace/couchdb/etc/couchdb/default.ini
- @@ -222,8 +225,8 @@
-  touch $PID_FILE
-  interactive_option="+Bd -noinput"
-  fi
- -
command="/home/matt/projects/couch/android-build/couchdb/../otp/bootstrap/bin/erl
 $interactive_option $ERL_START_OPTIONS \
- --env ERL_LIBS 
/sdcard/Android/data/com.your.namespace/couchdb/lib/couchdb/erlang/lib 
-couch_ini $start_arguments -s couch"
- +command="erl $interactive_option $ERL_START_OPTIONS \
- +-env ERL_LIBS 
/data/data/com.your.namespace/couchdb/lib/couchdb/erlang/lib -couch_ini 
$start_arguments -s couch"
-  if test "$BACKGROUND" = "true" -a "$RECURSED" = "false"; then
-  $0 $background_start_arguments -b -r $RESPAWN_TIMEOUT -p $PID_FILE \
-  -o $STDOUT_FILE -e $STDERR_FILE -R &
- diff -ru 
release.bak/sdcard/Android/data/com.your.namespace/couchdb/etc/couchdb/local.ini
 release/sdcard/Android/data/com.your.namespace/couchdb/etc/couchdb/local.ini
- --- 
release.bak/sdcard/Android/data/com.your.namespace/couchdb/etc/couchdb/local.ini
  2011-02-05 01:26:00.0 -0700
- +++ 
release/sdcard/Android/data/com.your.namespace/couchdb/etc/couchdb/local.ini
  2011-02-08 15:58:30.0 -0700
- @@ -5,23 +5,30 @@
-  ; overwritten on server upgrade.
-  
-  [couchdb]
- +database_dir = 
/sdcard/Android/data/com.your.namespace/couchdb/var/lib/couchdb
- +view_index_dir = 
/sdcard/Android/data/com.your.namespace/couchdb/var/lib/couchdb
- +util_driver_dir = /data/data/com.your.namespace/couchdb/lib/couchdb
-  ;max_document_size = 4294967296 ; bytes
- +uri_file = 
/sdcard/Android/data/com.your.namespace/couchdb/var/lib/couchdb/couch.uri
-  
-  [httpd]
- -;port = 5984
- -;bind_address = 127.0.0.1
- +port = 5999
- +bind_address = 127.0.0.1
-  ; Uncomment next line to trigger basic-auth popup on unauthorized requests.
-  ;WWW-Authenticate = Basic realm="administrator"
-  
- +[log]
- +file = 
/sdcard/Android/data/com.your.namespace/couchdb/var/log/couchdb/couch.log
- +level = debug
- +
-  [couch_httpd_auth]
-  ; If you set this to true, you should also uncomment the WWW-Authenticate 
line
-  ; above. If you don't configure a WWW-Authenticate header, CouchDB will send
-  ; Basic realm="server" in order to prevent you getting logged out.
-  ; require_valid_user = false
-  
- -[log]
- -;level = debug
- -
- +[query_servers]
- +javascript = /data/data/com.your.namespace/couchdb/bin/couchjs_wrapper 
/data/data/com.your.namespace/couchdb/share/couchdb/server/main.js
-  
-  ; To enable Virtual Hosts in CouchDB, add a vhost = path directive. All 
requests to
-  ; the Virual Host will be redirected to the path. In the example below all 
requests
- diff -ruN 
release.bak/sdcard/Android/data/com.your.namespace/couchdb/bin/couchjs 
release/sdcard/Android/data/com.your.namespace/couchdb/bin/couchjs
- --- release.bak/sdcard/Android/data/com.your.namespace/couchdb/bin/couchjs
2011-02-05 01:26:00.0 -0700
- +++ release/sdcard/Android/data/com.your.namespace/couchdb/bin/couchjs
2011-02-05 01:37:12.0 -0700
- @@ -1,4 +1,4 @@
- -#! /bin/sh -e
- +#!/system/bin/sh -e
-  
-  # Licensed under the Apache License, Version 2.0 (the "License"); you may not
-  # use this file except in compliance with the License. You may obtain a copy 
of
- @@ -63,7 +63,7 @@
-  }
-  
-  run_couchjs () {
- -exec 
/sdcard/Android/data/com.your.namespace/couchdb/lib/couchdb/bin/couchjs $@
- +exec 
LD_LIBRARY_PATH=/data/data/com.your.namespace/couchdb/lib/couchdb/bin 
/data/data/com.your.namespace/couchdb/lib/couchdb/bin/couchjs $@
-  }
-  
-  parse_script_option_list () {
- diff -ruN 
release.bak/sdcard/Android/data/com.your.namespace/couchdb/bin/couchjs_wrapper 

[Couchdb Wiki] Update of "AndroidCouchPatch" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "AndroidCouchPatch" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/AndroidCouchPatch?action=diff=4=5

Comment:
Moved to 
https://cwiki.apache.org/confluence/display/COUCHDB/Old+Android+Compile+Info

- For use with [[Installing_on_Android]]
  
- {{{
- diff --git a/configure.ac b/configure.ac
- index c609a08..a31bc7b 100644
- --- a/configure.ac
- +++ b/configure.ac
- @@ -118,6 +118,21 @@ Is the Mozilla SpiderMonkey library installed?])])])])])
-  
-  AC_SUBST(JS_LIB_BASE)
-  
- +AC_CHECK_LIB([$JS_LIB_BASE], [JS_FreezeObject],
- +AC_DEFINE([HAVE_JS_FREEZE_OBJECT], [1], [Define whether we have JS_FreezeObject]))
- +
- +AC_CHECK_LIB([$JS_LIB_BASE], [JS_NewGlobalObject],
- +AC_DEFINE([HAVE_JS_NEW_GLOBAL_OBJECT], [1], [Define whether we have JS_NewGlobalObject]))
- +
- +AC_CHECK_LIB([$JS_LIB_BASE], [js_fgets],
- +AC_DEFINE([HAVE_JS_FGETS], [1], [Define whether js_fgets is available to use]))
- +
- +AC_CHECK_LIB([$JS_LIB_BASE], [JS_GetStringCharsAndLength],
- +AC_DEFINE([HAVE_JS_GET_STRING_CHARS_AND_LENGTH], [1], [Define whether we have JS_GetStringCharsAndLength]))
- +
- +AC_CHECK_LIB([$JS_LIB_BASE], [JS_NewCompartmentAndGlobalObject],
- +AC_DEFINE([HAVE_COMPARTMENTS], [1], [Define whether we have JS_NewCompartmentAndGlobalObject]))
- +
-  if test x${IS_WINDOWS} = xTRUE; then
-  if test -f "$JS_LIB_DIR/$JS_LIB_BASE.dll"; then
-  # seamonkey 1.7- build layout on Windows
- @@ -189,6 +204,13 @@ AC_COMPILE_IFELSE(
-  CFLAGS="$OLD_CFLAGS"
-  AC_LANG_POP(C)
-  
- +AC_ARG_WITH([android], [AC_HELP_STRING([--with-android=PATH]
- +[set Android system build path])],[
- +ICU_CONFIG="" 
- +ICU_LOCAL_CFLAGS="-I$withval/external/icu4c/common -I$withval/external/icu4c/i18n"
- +ICU_LOCAL_LDFLAGS="-L$withval/out/target/product/generic/system/lib"
- +ICU_LOCAL_BIN=
- +], [
-  AC_ARG_WITH([win32-icu-binaries], [AC_HELP_STRING([--with-win32-icu-binaries=PATH],
-  [set PATH to the Win32 native ICU binaries directory])], [
-  ICU_CONFIG="" # supposed to be a command to query options...
- @@ -200,13 +222,19 @@ AC_ARG_WITH([win32-icu-binaries], [AC_HELP_STRING([--with-win32-icu-binaries=PAT
-  ICU_LOCAL_CFLAGS=`$ICU_CONFIG --cppflags-searchpath`
-  ICU_LOCAL_LDFLAGS=`$ICU_CONFIG --ldflags-searchpath`
-  ICU_LOCAL_BIN=
- -])
- +])])
-  
-  AC_SUBST(ICU_CONFIG)
-  AC_SUBST(ICU_LOCAL_CFLAGS)
-  AC_SUBST(ICU_LOCAL_LDFLAGS)
-  AC_SUBST(ICU_LOCAL_BIN)
-  
- +AC_ARG_WITH([android-curl], [AC_HELP_STRING([--with-android-curl=PATH]
- +[set PATH to directory where curl is built for android])], [
- +CURL_CFLAGS="-I$withval/include -DCURL_STATICLIB"
- +CURL_LIBDIR="$withval/lib"
- +CURL_LDFLAGS="-L$CURL_LIBDIR -lcurl"
- +], [
-  AC_ARG_WITH([win32-curl], [AC_HELP_STRING([--with-win32-curl=PATH],
-  [set PATH to the Win32 native curl directory])], [
-  # default build on windows is a static lib, and that's what we want too
- @@ -216,12 +244,15 @@ AC_ARG_WITH([win32-curl], [AC_HELP_STRING([--with-win32-curl=PATH],
-  ], [
-  AC_CHECK_CURL([7.18.0])
-  CURL_LDFLAGS=-lcurl
- -])
- +])])
-  
-  AC_SUBST(CURL_CFLAGS)
-  AC_SUBST(CURL_LIBS)
-  AC_SUBST(CURL_LDFLAGS)
-  
- +#Probably should fix this up better in the future for cross-compiles in general
- +#instead of just keeping this exception for android
- +if test "x$CC" != "xagcc"; then
-  case "$(uname -s)" in
-Linux)
-  LIBS="$LIBS -lcrypt"
- @@ -234,6 +265,7 @@ case "$(uname -s)" in
-  LIBS="$LIBS -lcrypto"
-;;
-  esac
- +fi
-  
-  AC_PATH_PROG([ERL], [erl])
-  
- diff --git a/src/couchdb/priv/couch_js/http.c b/src/couchdb/priv/couch_js/http.c
- index 6c2a8a8..5a2112d 100644
- --- a/src/couchdb/priv/couch_js/http.c
- +++ b/src/couchdb/priv/couch_js/http.c
- @@ -43,6 +43,10 @@ char* METHODS[] = {"GET", "HEAD", "POST", "PUT", "DELETE", "COPY", NULL};
-  #define DELETE  4
-  #define COPY5
-  
- +#ifdef JSFUN_CONSTRUCTOR
- +#define JSFUN_FAST_NATIVE 0
- +#endif
- +
-  static JSBool
-  go(JSContext* cx, JSObject* obj, HTTPData* http, char* body, size_t blen);
-  
- @@ -50,10 +54,21 @@ static JSString*
-  str_from_binary(JSContext* cx, char* data, size_t length);
-  
-  static JSBool
- +#ifdef JSFUN_CONSTRUCTOR
- +constructor(JSContext* cx, uintN argc, jsval* vp)
- +#else
-  constructor(JSContext* cx, JSObject* obj, uintN argc, jsval* argv, jsval* rval)
- +#endif
-  {
-  HTTPData* http = NULL;
-  JSBool ret = JS_FALSE;
- +#ifdef JSFUN_CONSTRUCTOR
- +JSObject* obj = JS_NewObjectForConstructor(cx, vp);
- +if(!obj) {
- +JS_ReportError(cx, "Failed to create 'this' object");
- +goto error;
- +}
- +#endif
-  
-  http = (HTTPData*) malloc(sizeof(HTTPData));
-  if(!http)
- @@ -80,6 +95,9 @@ error:
-  if(http) free(http);
-  
-  success:
- +#ifdef JSFUN_CONSTRUCTOR
- +JS_SET_RVAL(cx, vp, OBJECT_TO_JSVAL(obj));
- 

[Couchdb Wiki] Update of "AndroidMozillaPatch" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "AndroidMozillaPatch" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/AndroidMozillaPatch?action=diff&rev1=4&rev2=5

Comment:
Moved to https://cwiki.apache.org/confluence/display/COUCHDB/Old+Android+Compile+Info

- For use with [[Installing_on_Android]]
  
- {{{
- --- mozilla-central-bb9089ae2322/nsprpub/pr/src/io/prlog.c    2011-01-21 16:40:14.0 -0700
- +++ mozilla-central-bb9089ae2322.bak/nsprpub/pr/src/io/prlog.c 2011-01-31 13:53:30.0 -0700
- @@ -42,7 +42,7 @@
-  #include "prprf.h"
-  #include 
-  #ifdef ANDROID
- -#include <android/log.h>
- +//#include <android/log.h>
-  #endif
-  
-  /*
- @@ -135,7 +135,7 @@
-  if (fd == _pr_stderr) {  \
-  char savebyte = buf[nb]; \
-  buf[nb] = '\0';  \
- -__android_log_write(ANDROID_LOG_INFO, "PRLog", buf); \
- +printf("PRLog: %s", buf);\
-  buf[nb] = savebyte;  \
-  } else { \
-  PR_Write(fd, buf, nb);   \
- --- mozilla-central-bb9089ae2322/js/src/jscntxt.cpp   2011-01-21 16:40:14.0 -0700
- +++ mozilla-central-bb9089ae2322.bak/js/src/jscntxt.cpp   2011-01-31 13:49:52.0 -0700
- @@ -46,7 +46,7 @@
-  #include 
-  #include 
-  #ifdef ANDROID
- -# include <android/log.h>
- +//# include <android/log.h>
-  # include <fstream>
-  # include <string>
-  #endif  // ANDROID
- @@ -2218,12 +2218,14 @@
-  // Check for the known-bad kernel version (2.6.29).
-  std::ifstream osrelease("/proc/sys/kernel/osrelease");
-  std::getline(osrelease, line);
- -__android_log_print(ANDROID_LOG_INFO, "Gecko", "Detected osrelease `%s'",
- -line.c_str());
- +//__android_log_print(ANDROID_LOG_INFO, "Gecko", "Detected osrelease `%s'",
- +//line.c_str());
- +printf("Gecko: Detected osrelease `%s'", line.c_str());
-  
-  if (line.npos == line.find("2.6.29")) {
-  // We're using something other than 2.6.29, so the JITs should work.
- -__android_log_print(ANDROID_LOG_INFO, "Gecko", "JITs are not broken");
- +//__android_log_print(ANDROID_LOG_INFO, "Gecko", "JITs are not broken");
- +printf("Gecko: JITs are not broken");
-  return false;
-  }
-  
- @@ -2243,8 +2245,9 @@
-  };
-  for (const char** hw = [0]; *hw; ++hw) {
-  if (line.npos != line.find(*hw)) {
- -__android_log_print(ANDROID_LOG_INFO, "Gecko",
- -"Blacklisted device `%s'", *hw);
- +//__android_log_print(ANDROID_LOG_INFO, "Gecko",
- +//"Blacklisted device `%s'", *hw);
- +printf("Gecko: Blacklisted device `%s'", *hw);
-  broken = true;
-  break;
-  }
- @@ -2254,8 +2257,9 @@
-  std::getline(cpuinfo, line);
-  } while(!cpuinfo.fail() && !cpuinfo.eof());
-  
- -__android_log_print(ANDROID_LOG_INFO, "Gecko", "JITs are %sbroken",
- -broken ? "" : "not ");
- +//__android_log_print(ANDROID_LOG_INFO, "Gecko", "JITs are %sbroken",
- +//broken ? "" : "not ");
- +printf("Gecko: JITs are %sbroken", broken ? "" : "not ");
-  
-  return broken;
-  #endif  // ifndef ANDROID
- --- mozilla-central/nsprpub/pr/src/Makefile.in.bak    2011-02-02 15:39:53.0 -0700
- +++ mozilla-central/nsprpub/pr/src/Makefile.in    2011-02-02 15:40:07.0 -0700
- @@ -205,9 +205,9 @@
-  OS_LIBS  = ws2.lib
-  endif
-  
- -ifeq ($(OS_TARGET),Android)
- -OS_LIBS  += -llog
- -endif
- +#ifeq ($(OS_TARGET),Android)
- +#OS_LIBS += -llog
- +#endif
-  
-  ifeq ($(OS_TARGET),MacOSX)
-  OS_LIBS  = -framework CoreServices -framework CoreFoundation
- --- mozilla-central/js/src/configure.in.bak   2011-02-02 15:41:20.0 -0700
- +++ mozilla-central/js/src/configure.in   2011-02-02 15:41:43.0 -0700
- @@ -291,11 +291,10 @@
-  CFLAGS="-mandroid -I$android_platform/usr/include -msoft-float 
-fno-short-enums -fno-exceptions -march=armv5te -mthumb-interwork $CFLAGS"
-  CXXFLAGS="-mandroid -I$android_platform/usr/include -msoft-float 
-fno-short-enums -fno-exceptions -march=armv5te -mthumb-interwork $CXXFLAGS"
-  
- -dnl Add -llog by default, since we use it all over the place.
-  dnl Add --allow-shlib-undefined, because libGLESv2 links to an
-  dnl undefined symbol (present on the hardware, just not in the
-  dnl NDK.)
- -LDFLAGS="-mandroid -L$android_platform/usr/lib -Wl,-rpath-link=$android_platform/usr/lib --sysroot=$android_platform -llog -Wl,--allow-shlib-undefined $LDFLAGS"
- +

[Couchdb Wiki] Update of "AndroidAgcc" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "AndroidAgcc" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/AndroidAgcc?action=diff&rev1=1&rev2=2

Comment:
Moved to https://cwiki.apache.org/confluence/display/COUCHDB/Old+Android+Compile+Info

- For use with [[Installing_on_Android]]
  
- The Android toolchain is a little complicated so I used http://plausible.org/andy/agcc to simplify the process. Here is my patch against that script:
- 
- {{{
- --- agcc.old  2011-01-23 19:47:49.0 -0700
- +++ agcc  2011-01-29 15:43:58.0 -0700
- @@ -39,7 +39,10 @@
-  my $DROID = $1;
-  
-  my $ALIB = "$DROID/out/target/product/generic/obj/lib";
- -my $TOOLCHAIN = "$DROID/prebuilt/linux-x86/toolchain/arm-eabi-4.2.1";
- +my $TOOLCHAIN = "$DROID/prebuilt/linux-x86/toolchain/arm-eabi-4.4.0";
- +
- +print STDERR "ALIB  $ALIB\n";
- +print STDERR "TOOLCHAIN ... $TOOLCHAIN\n";
-  
-  my @include_paths = (
-  "-I$DROID/system/core/include",
- @@ -85,6 +88,7 @@
-  "-mthumb-interwork",
-  "-fpic",
-  "-fno-exceptions",
- +"-fno-short-enums", # See www for why we added this
-  "-ffunction-sections",
-  "-funwind-tables", # static exception-like tables
-  "-fstack-protector", # check guard variable before return
- @@ -114,7 +118,7 @@
-  "-nostdlib",
-  "$ALIB/crtend_android.o",
-  "$ALIB/crtbegin_dynamic.o",
- -"$TOOLCHAIN/lib/gcc/arm-eabi/4.2.1/interwork/libgcc.a",
- +"$TOOLCHAIN/lib/gcc/arm-eabi/4.4.0/interwork/libgcc.a",
-  "-lc",
-  "-lm");
-  
- @@ -129,7 +133,7 @@
-  "-lc",
-  "-lm",
-  "-Wl,--no-undefined",
- -"$TOOLCHAIN/lib/gcc/arm-eabi/4.2.1/interwork/libgcc.a",
- +"$TOOLCHAIN/lib/gcc/arm-eabi/4.4.0/interwork/libgcc.a",
-  "-Wl,--whole-archive"); # .a, .o input files go *after* here
-  
-  # Now implement a quick parser for a gcc-like command line
- }}}
- 


[couchdb] 02/02: Enforce partition size limits

2018-12-20 Thread davisp
This is an automated email from the ASF dual-hosted git repository.

davisp pushed a commit to branch feature/database-partition-limits
in repository https://gitbox.apache.org/repos/asf/couchdb.git

commit 197d5f2d73e9a331f72f07d4ca7f155c56f44bb1
Author: Paul J. Davis 
AuthorDate: Fri Dec 14 11:06:03 2018 -0600

Enforce partition size limits

This limit helps prevent users from inadvertently misusing partitions by
refusing to add documents when the size of a partition exceeds 10GiB.

Co-authored-by: Robert Newson 
---
 rel/overlay/etc/default.ini|  5 +++
 src/chttpd/src/chttpd.erl  |  3 ++
 src/couch/src/couch_db_updater.erl | 81 --
 3 files changed, 85 insertions(+), 4 deletions(-)

diff --git a/rel/overlay/etc/default.ini b/rel/overlay/etc/default.ini
index a77add4..ae9d313 100644
--- a/rel/overlay/etc/default.ini
+++ b/rel/overlay/etc/default.ini
@@ -64,6 +64,11 @@ default_engine = couch
 ; move deleted databases/shards there instead. You can then manually delete
 ; these files later, as desired.
 ;enable_database_recovery = false
+;
+; Set the maximum size allowed for a partition. This helps users avoid
+; inadvertently abusing partitions resulting in hot shards. The default
+; is 10GiB. A value of 0 or less will disable partition size checks.
+;max_partition_size = 10737418240
 
 [couchdb_engines]
 ; The keys in this section are the filename extension that
diff --git a/src/chttpd/src/chttpd.erl b/src/chttpd/src/chttpd.erl
index 2f241cd..6558b1e 100644
--- a/src/chttpd/src/chttpd.erl
+++ b/src/chttpd/src/chttpd.erl
@@ -873,6 +873,9 @@ error_info(conflict) ->
 {409, <<"conflict">>, <<"Document update conflict.">>};
 error_info({conflict, _}) ->
 {409, <<"conflict">>, <<"Document update conflict.">>};
+error_info({partition_overflow, DocId}) ->
+Descr = <<"'", DocId/binary, "' exceeds partition limit">>,
+{403, <<"partition_overflow">>, Descr};
 error_info({{not_found, missing}, {_, _}}) ->
 {409, <<"not_found">>, <<"missing_rev">>};
 error_info({forbidden, Error, Msg}) ->
diff --git a/src/couch/src/couch_db_updater.erl b/src/couch/src/couch_db_updater.erl
index 95508e2..00fee90 100644
--- a/src/couch/src/couch_db_updater.erl
+++ b/src/couch/src/couch_db_updater.erl
@@ -21,6 +21,7 @@
 -include("couch_db_int.hrl").
 
 -define(IDLE_LIMIT_DEFAULT, 61000).
+-define(DEFAULT_MAX_PARTITION_SIZE, 16#280000000). % 10 GiB
 
 
 -record(merge_acc, {
@@ -28,7 +29,8 @@
 merge_conflicts,
 add_infos = [],
 rem_seqs = [],
-cur_seq
+cur_seq,
+full_partitions = []
 }).
 
 
@@ -466,13 +468,22 @@ merge_rev_trees([], [], Acc) ->
 merge_rev_trees([NewDocs | RestDocsList], [OldDocInfo | RestOldInfo], Acc) ->
 #merge_acc{
 revs_limit = Limit,
-merge_conflicts = MergeConflicts
+merge_conflicts = MergeConflicts,
+full_partitions = FullPartitions
 } = Acc,
 
 % Track doc ids so we can debug large revision trees
 erlang:put(last_id_merged, OldDocInfo#full_doc_info.id),
 NewDocInfo0 = lists:foldl(fun({Client, NewDoc}, OldInfoAcc) ->
-merge_rev_tree(OldInfoAcc, NewDoc, Client, MergeConflicts)
+NewInfo = merge_rev_tree(OldInfoAcc, NewDoc, Client, MergeConflicts),
+case is_overflowed(NewInfo, OldInfoAcc, FullPartitions) of
+true when not MergeConflicts ->
+DocId = NewInfo#full_doc_info.id,
+send_result(Client, NewDoc, {partition_overflow, DocId}),
+OldInfoAcc;
+false ->
+NewInfo
+end
 end, OldDocInfo, NewDocs),
 NewDocInfo1 = maybe_stem_full_doc_info(NewDocInfo0, Limit),
 % When MergeConflicts is false, we updated #full_doc_info.deleted on every
@@ -595,6 +606,16 @@ merge_rev_tree(OldInfo, NewDoc, _Client, true) ->
 {NewTree, _} = couch_key_tree:merge(OldTree, NewTree0),
 OldInfo#full_doc_info{rev_tree = NewTree}.
 
+is_overflowed(_New, _Old, []) ->
+false;
+is_overflowed(Old, Old, _FullPartitions) ->
+false;
+is_overflowed(New, Old, FullPartitions) ->
+Partition = couch_partition:from_docid(New#full_doc_info.id),
+NewSize = estimate_size(New),
+OldSize = estimate_size(Old),
+lists:member(Partition, FullPartitions) andalso NewSize > OldSize.
+
 maybe_stem_full_doc_info(#full_doc_info{rev_tree = Tree} = Info, Limit) ->
 case config:get_boolean("couchdb", "stem_interactive_updates", true) of
 true ->
@@ -617,13 +638,31 @@ update_docs_int(Db, DocsList, LocalDocs, MergeConflicts, FullCommit) ->
 (Id, not_found) ->
 #full_doc_info{id=Id}
 end, Ids, OldDocLookups),
+
+%% Get the list of full partitions
+FullPartitions = case couch_db:is_partitioned(Db) of
+true ->
+case max_partition_size() of
+N when N =< 0 ->
+[];
+Max ->
+Partitions = lists:usort(lists:map(fun(Id) ->
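The overflow check added above refuses a write only when the document's partition is already full and the update would grow it. A hypothetical Python rendering of the `is_overflowed` logic (the function names and the `estimate_size` callback are stand-ins for illustration, not CouchDB APIs):

```python
def is_overflowed(new_info, old_info, full_partitions, estimate_size):
    """Sketch of the is_overflowed/3 clauses from couch_db_updater.erl."""
    if not full_partitions:
        # No partition has hit the limit, so nothing can overflow.
        return False
    if new_info == old_info:
        # An update that changes nothing never overflows.
        return False
    # CouchDB partition keys are the doc id prefix before the first ':'.
    partition = new_info["id"].split(":", 1)[0]
    # Refuse only growth inside an already-full partition; updates that
    # shrink a document (e.g. deletions) are still allowed through.
    return (partition in full_partitions
            and estimate_size(new_info) > estimate_size(old_info))
```

In the real code the refusal surfaces to the client as the 403 `partition_overflow` error wired into `chttpd:error_info/1` in this same commit.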
+

[couchdb] branch feature/database-partition-limits updated (9d9ec42 -> 197d5f2)

2018-12-20 Thread davisp
This is an automated email from the ASF dual-hosted git repository.

davisp pushed a change to branch feature/database-partition-limits
in repository https://gitbox.apache.org/repos/asf/couchdb.git.


 discard 9d9ec42  Enforce partition size limits
 discard 6c62057  Use an accumulator when merging revision trees
omit 5b295dc  Add Elixir tests for database partitions
omit 98e77cd  Support partitioned queries in Mango
omit 530d8a7  Optimize offset/limit for partition queries
omit 64527ce  Optimize all_docs queries in a single partition
omit 1296d1e  Implement partitioned views
omit f9db721  Implement `couch_db:get_partition_info/2`
omit cc64048  Implement partitioned dbs
omit b508c67  Implement configurable hash functions
omit d9af33e  Validate design document options more strictly
omit 5a1e72e  Pass the DB record to index validation functions
omit ab1bf4d  Implement `fabric_util:open_cluster_db`
omit 63314d9  Improve `couch_db:clustered_db` flexibility
omit 7909b98  Add PSE API to store opaque properties
 add be38d66  Support specifying individual Elixir tests to run
 add 82c9219  Merge branch 'master' into allow-specifying-individual-elixir-tests
 add 92adefa  Merge pull request #1800 from cloudant/allow-specifying-individual-elixir-tests
 add 11feb2f  Increase timeout on restart in JS/elixir tests to 30s (#1820)
 add 3ad082e  Add PSE API to store opaque properties
 add 60d9ee4  Improve `couch_db:clustered_db` flexibility
 add 92b58ba  Implement `fabric_util:open_cluster_db`
 add e943198  Pass the DB record to index validation functions
 add 1da3631  Validate design document options more strictly
 add ab806a7  Implement configurable hash functions
 add a32d0d6  Implement partitioned dbs
 add d3f508e  Implement `couch_db:get_partition_info/2`
 add 71efe57  Implement partitioned views
 add c5319c4  Optimize all_docs queries in a single partition
 add 329f4e3  Optimize offset/limit for partition queries
 add 004ce09  Use index names when testing index selection
 add 718c872  Support partitioned queries in Mango
 add 005b442  Add Elixir tests for database partitions
 new ec14a51  Use an accumulator when merging revision trees
 new 197d5f2  Enforce partition size limits

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (9d9ec42)
\
 N -- N -- N   refs/heads/feature/database-partition-limits (197d5f2)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
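The "omit" versus "discard" distinction above comes down to reachability: a commit is only gone for good once no reference can reach it. A toy Python model of that rule (not git itself; commit names follow the B/O/N diagram):

```python
# Parent links for the diagram: B is the common base, O1..O3 the old
# branch history, N1..N3 the force-pushed replacement history.
PARENTS = {"B": None, "O1": "B", "O2": "O1", "O3": "O2",
           "N1": "B", "N2": "N1", "N3": "N2"}

def reachable(refs):
    """Every commit reachable by walking parents from any ref tip."""
    seen = set()
    for tip in refs.values():
        while tip is not None and tip not in seen:
            seen.add(tip)
            tip = PARENTS[tip]
    return seen

# After the force push, the branch alone keeps only B and the N commits,
# so O1..O3 are "discard": gone forever once git prunes them.
assert reachable({"branch": "N3"}) == {"B", "N1", "N2", "N3"}

# If any other ref (a tag, another branch) still points into the O line,
# those commits are merely "omit": dropped from this ref, not deleted.
assert "O2" in reachable({"branch": "N3", "old-tag": "O3"})
```

This is why the notification can promise that "omit" revisions survive: some other reference still anchors them in the object graph.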


Summary of changes:
 Makefile  |  8 +++-
 Makefile.win  |  9 +++-
 src/chttpd/src/chttpd_show.erl|  1 +
 src/couch/src/couch_bt_engine.erl |  4 +-
 src/couch/src/couch_db.erl| 10 ++---
 src/couch/src/couch_db_engine.erl |  2 +-
 src/couch/src/couch_partition.erl | 64 ++-
 src/couch_mrview/src/couch_mrview_updater.erl |  8 ++--
 src/couch_mrview/src/couch_mrview_util.erl| 18 +---
 src/fabric/src/fabric.erl | 10 ++---
 src/fabric/src/fabric_db_partition_info.erl   |  9 ++--
 src/fabric/src/fabric_view.erl|  7 ++-
 src/mango/src/mango_error.erl |  2 +-
 src/mango/src/mango_idx.erl   |  8 ++--
 src/mango/src/mango_opts.erl  |  6 +--
 src/mango/test/05-index-selection-test.py | 20 -
 src/mango/test/user_docs.py   | 37 +---
 test/elixir/lib/couch/db_test.ex  |  2 +-
 test/elixir/run   |  2 +-
 test/elixir/run.cmd   |  2 +-
 test/javascript/test_setup.js |  4 +-
 21 files changed, 139 insertions(+), 94 deletions(-)



[couchdb] 01/02: Use an accumulator when merging revision trees

2018-12-20 Thread davisp
This is an automated email from the ASF dual-hosted git repository.

davisp pushed a commit to branch feature/database-partition-limits
in repository https://gitbox.apache.org/repos/asf/couchdb.git

commit ec14a51e0bd9eb300cf3dc83981eefd96b682c54
Author: Paul J. Davis 
AuthorDate: Fri Dec 14 10:31:02 2018 -0600

Use an accumulator when merging revision trees

This cleans up the `couch_db_updater:merge_rev_trees/7` to instead use
an accumulator argument.
---
 src/couch/src/couch_db_updater.erl | 57 --
 1 file changed, 43 insertions(+), 14 deletions(-)

diff --git a/src/couch/src/couch_db_updater.erl b/src/couch/src/couch_db_updater.erl
index c0974aa..95508e2 100644
--- a/src/couch/src/couch_db_updater.erl
+++ b/src/couch/src/couch_db_updater.erl
@@ -23,6 +23,15 @@
 -define(IDLE_LIMIT_DEFAULT, 61000).
 
 
+-record(merge_acc, {
+revs_limit,
+merge_conflicts,
+add_infos = [],
+rem_seqs = [],
+cur_seq
+}).
+
+
 init({Engine, DbName, FilePath, Options0}) ->
 erlang:put(io_priority, {db_update, DbName}),
 update_idle_limit_from_config(),
@@ -450,11 +459,18 @@ doc_tag(#doc{meta=Meta}) ->
 Else -> throw({invalid_doc_tag, Else})
 end.
 
-merge_rev_trees(_Limit, _Merge, [], [], AccNewInfos, AccRemoveSeqs, AccSeq) ->
-{ok, lists:reverse(AccNewInfos), AccRemoveSeqs, AccSeq};
-merge_rev_trees(Limit, MergeConflicts, [NewDocs|RestDocsList],
-[OldDocInfo|RestOldInfo], AccNewInfos, AccRemoveSeqs, AccSeq) ->
-erlang:put(last_id_merged, OldDocInfo#full_doc_info.id), % for debugging
+merge_rev_trees([], [], Acc) ->
+{ok, Acc#merge_acc{
+add_infos = lists:reverse(Acc#merge_acc.add_infos)
+}};
+merge_rev_trees([NewDocs | RestDocsList], [OldDocInfo | RestOldInfo], Acc) ->
+#merge_acc{
+revs_limit = Limit,
+merge_conflicts = MergeConflicts
+} = Acc,
+
+% Track doc ids so we can debug large revision trees
+erlang:put(last_id_merged, OldDocInfo#full_doc_info.id),
 NewDocInfo0 = lists:foldl(fun({Client, NewDoc}, OldInfoAcc) ->
 merge_rev_tree(OldInfoAcc, NewDoc, Client, MergeConflicts)
 end, OldDocInfo, NewDocs),
@@ -475,22 +491,25 @@ merge_rev_trees(Limit, MergeConflicts, [NewDocs|RestDocsList],
 end,
 if NewDocInfo2 == OldDocInfo ->
 % nothing changed
-merge_rev_trees(Limit, MergeConflicts, RestDocsList, RestOldInfo,
-AccNewInfos, AccRemoveSeqs, AccSeq);
+merge_rev_trees(RestDocsList, RestOldInfo, Acc);
 true ->
 % We have updated the document, give it a new update_seq. Its
 % important to note that the update_seq on OldDocInfo should
 % be identical to the value on NewDocInfo1.
 OldSeq = OldDocInfo#full_doc_info.update_seq,
 NewDocInfo3 = NewDocInfo2#full_doc_info{
-update_seq = AccSeq + 1
+update_seq = Acc#merge_acc.cur_seq + 1
 },
 RemoveSeqs = case OldSeq of
-0 -> AccRemoveSeqs;
-_ -> [OldSeq | AccRemoveSeqs]
+0 -> Acc#merge_acc.rem_seqs;
+_ -> [OldSeq | Acc#merge_acc.rem_seqs]
 end,
-merge_rev_trees(Limit, MergeConflicts, RestDocsList, RestOldInfo,
-[NewDocInfo3|AccNewInfos], RemoveSeqs, AccSeq+1)
+NewAcc = Acc#merge_acc{
+add_infos = [NewDocInfo3 | Acc#merge_acc.add_infos],
+rem_seqs = RemoveSeqs,
+cur_seq = Acc#merge_acc.cur_seq + 1
+},
+merge_rev_trees(RestDocsList, RestOldInfo, NewAcc)
 end.
 
 merge_rev_tree(OldInfo, NewDoc, Client, false)
@@ -599,8 +618,18 @@ update_docs_int(Db, DocsList, LocalDocs, MergeConflicts, FullCommit) ->
 #full_doc_info{id=Id}
 end, Ids, OldDocLookups),
 % Merge the new docs into the revision trees.
-{ok, NewFullDocInfos, RemSeqs, _} = merge_rev_trees(RevsLimit,
-MergeConflicts, DocsList, OldDocInfos, [], [], UpdateSeq),
+AccIn = #merge_acc{
+revs_limit = RevsLimit,
+merge_conflicts = MergeConflicts,
+add_infos = [],
+rem_seqs = [],
+cur_seq = UpdateSeq
+},
+{ok, AccOut} = merge_rev_trees(DocsList, OldDocInfos, AccIn),
+#merge_acc{
+add_infos = NewFullDocInfos,
+rem_seqs = RemSeqs
+} = AccOut,
 
 % Write out the document summaries (the bodies are stored in the nodes of
 % the trees, the attachments are already written to disk)



[Couchdb Wiki] Update of "Advanced_Shows_and_Lists_Throwing_Redirects" by JoanTouzet

2018-12-20 Thread Apache Wiki
Dear wiki user,

You have subscribed to a wiki page "Couchdb Wiki" for change notification.

The page "Advanced_Shows_and_Lists_Throwing_Redirects" has been deleted by JoanTouzet:

https://wiki.apache.org/couchdb/Advanced_Shows_and_Lists_Throwing_Redirects?action=diff&rev1=5&rev2=6

Comment:
Migrated to https://docs.couchdb.org/en/latest/ddocs/ddocs.html

- <>
  
- ## page was renamed from Advanced Shows and Lists: Throwing Redirects
- ## page was renamed from Throw a 404 or a redirect
- = Advanced Shows and Lists: Throwing Redirects =
- 
- == Throw a 404 error ==
- 
- To throw a 404 from inside a _show or _list func .. the easiest way is:
- 
- {{{
- throw (['error', 'not_found', 'Some message like Page not found'])
- }}}
- 
- That will be caught by the top level loop thing and turned into a nice response.
- 
- == Return a redirect ==
- 
- There's no top level catcher thing for redirects, so you can't *throw a redirect*, you have to 'return' it.
- 
- To do a redirect, there's a library function that will do it for you:
- 
- {{{
- var redirect = require("vendor/couchapp/lib/redirect");
- return redirect.permanent('http://some.new/place');
- }}}
- 
- You can use the path lib to help get some neat urls, have a look at vendor/couchapp/lib/path.js source ..
- 
- What this actually does is the equivalent of:
- 
- {{{
- return { code : 301, headers : { "Location" : 'http://some.new/place' } };
- }}}
- 
- The 'code' is the http response code, the http response would be something like:
- 
- {{{
- HTTP/1.1 301 Moved Permanently
- Vary: Accept
- Server: CouchDB/1.0.1 (Erlang OTP/R13B)
- Location: http://foot:5984/products/dvrs/standalone-h.264-16-channel-dvr
- Etag: "CYFPH3WEUO9R5B8S26QWV3NK4"
- Date: Sat, 11 Sep 2010 04:51:37 GMT
- Content-Type: application/json
- Content-Length: 0
- }}}
- 
- = Case study, Making a Redirect Throwable =
- 
- If you're deep in a lib/mystuff.js func, I use this sort of style for a certain checker function.
- 
- In this example I have different product lines in my rewrites, of the form:
- 
- {{{
- [
- {
- "from":   "/products/dvrs",
- "to": "_list/dvrs",
- "method": "GET" 
- },
- {
- "from":   "/products/dvrs/new",
- "to": "_show/dvr-new",
- "method": "GET" 
- },
- {
- "from":   "/products/dvrs/new",
- "to": "_show/dvr-save",
- "method": "POST" 
- },
- {
- "from":   "/products/dvrs/*",
- "to": "_show/dvr/*",
- "method": "GET" 
- },
- // Then a bunch of similar things for other product lines (like /products/cameras/ etc ..)
- ]
- }}}
- 
- The trouble is if someone uses the wrong url for the type of product, I want to put them right, this will help keep the search engine optimization good too. So if someone goes:
- 
- {{{
- http://mysite.kom/products/cameras/id_of_a_dvr
- }}}
- 
- It will call the _show/camera method with the id of a document that has {type: 'dvr'}. Which is bad because I use a different template but also bad because SEO will freak out if it ends up in google.
- 
- To fix it I did the following:
- 
- In my _show or _list func I put:
- 
- {{{
- var rewrite = require("lib/rewritehelper").init(req);
- // If we're trying to use this view to see something other than a dvr, make sure we're looking at the right type
-   try {rewrite.checkType(doc, 'camera');} catch (e) {return e;}
- }}}
- 
- I know people will yell at me for putting too much on one line. But I'm using this in all my _show and _list funcs, I want it to be nice and short.
- 
- In my *lib/rewritehelper.js* file I check the type and all that and if it's wrong I throw an exception, which the _show func catches and returns as is. Here's the full code for my lib function so far:
- 
- {{{
- /* Helps you get the rewrites right.
- 
- Complements the rewrites.json .. really both should be updated at the same time. Saves having to update every place you put a path to something.
- 
- */
- 
- exports.init = function(req) {
- 
- rewrite = {};
- 
- rewrite.url4doc = function (_id, prodType) {
- /* Returns a url to a product, takes doc._id and doc._type */
- // See if it's a product
- switch (prodType) {
- case "forum-topic":
- case "forum-post":
- return "http://" + req.headers.Host + "/forums/" + prodType + "s/" + _id;
- case "support":
- return "http://" + req.headers.Host + "/support/" + prodType + "s/" + _id;
- case "dvr": 
- case "camera": 
- return "http://" + req.headers.Host + "/products/" + prodType + "s/" + _id;
- 

[couchdb-documentation] branch master updated: More how tos (#371)

2018-12-20 Thread wohali
This is an automated email from the ASF dual-hosted git repository.

wohali pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/couchdb-documentation.git


The following commit(s) were added to refs/heads/master by this push:
 new 1d9ce5c  More how tos (#371)
1d9ce5c is described below

commit 1d9ce5cfa908ed30ee82b7eee6864d1768a69c61
Author: Joan Touzet 
AuthorDate: Thu Dec 20 11:30:55 2018 -0500

More how tos (#371)

* Typos and fixes from @flimzy review
* Migrated Apache as a reverse proxy from MoinMoin; called out HAProxy more prominently
* Adding pointer to more Erlang query server examples
* Add link to video on vhost/rewrite behaviour
---
 Makefile   |  2 +-
 src/best-practices/documents.rst   |  9 ++--
 src/best-practices/reverse-proxies.rst | 87 --
 src/config/http.rst|  4 ++
 src/config/query-servers.rst   |  3 ++
 5 files changed, 96 insertions(+), 9 deletions(-)

diff --git a/Makefile b/Makefile
index 62d75e2..d9b157a 100644
--- a/Makefile
+++ b/Makefile
@@ -45,7 +45,7 @@ man: $(SPHINXBUILD)
$(SPHINXBUILD) -b $@ $(SPHINXOPTS) $(BUILDDIR)/$@
 
 check:
-   python ext/linter.py $(SOURCE)
+   python3 ext/linter.py $(SOURCE)
 
 install-html:
 install-pdf:
diff --git a/src/best-practices/documents.rst b/src/best-practices/documents.rst
index 6230679..9cc181c 100644
--- a/src/best-practices/documents.rst
+++ b/src/best-practices/documents.rst
@@ -141,8 +141,7 @@ periodically, and are disconnected for more than this time before they
 resynchronise.
 
 All of the approaches below which allow automated merging of changes rely on
-having some sort of history back in time to the point where the replicas
-diverged.
+having some sort of history, back to the point where the replicas diverged.
 
 CouchDB does not provide a mechanism for this itself. It stores arbitrary
 numbers of old _ids for one document (trunk now has a mechanism for pruning the
@@ -160,7 +159,7 @@ live replicas last diverged.
 Approach 1: Single JSON doc
 ^^^
 
-The above structure is already valid Javascript, and so could be represented in
+The above structure is already valid JSON, and so could be represented in
 CouchDB just by wrapping it in an object and storing as a single document:
 
 .. code-block:: javascript
@@ -328,8 +327,8 @@ layer on the database to test each transaction. Some advantages are:
 
* Only the client or someone with the knowledge of the name and password can compute
   the value of SHA256 and recover the data.
-* Some columns are still left in the clear, an advantage if the marketing department
-  wants to compute aggregated statistics.
+* Some columns are still left in the clear, an advantage for computing aggregated
+  statistics.
* Computation of SHA256 is left to the client side computer which usually has cycles
   to spare.
* The system prevents server-side snooping by insiders and any attacker who might
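The advantages listed above rest on one property: only a party who knows both the name and the password can recompute the digest. A minimal sketch of that idea (field names and the `name:password` layout are hypothetical; a production scheme should also consider salting and key stretching):

```python
import hashlib

def pseudonymize(name: str, password: str) -> str:
    # The stored column supports equality checks and aggregate counting
    # without revealing the underlying data; anyone lacking both inputs
    # sees only an opaque 64-character hex digest.
    return hashlib.sha256(f"{name}:{password}".encode()).hexdigest()

token = pseudonymize("alice", "s3cret")  # hypothetical credentials
```

Rows sharing the same inputs hash to the same token, which is what keeps the cleartext columns usable for aggregated statistics.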
diff --git a/src/best-practices/reverse-proxies.rst b/src/best-practices/reverse-proxies.rst
index 051e7f5..5105960 100644
--- a/src/best-practices/reverse-proxies.rst
+++ b/src/best-practices/reverse-proxies.rst
@@ -16,15 +16,54 @@
 Reverse Proxies
 
 
+Reverse proxying with HAProxy
+=
+
 CouchDB recommends the use of `HAProxy`_ as a load balancer and reverse proxy.
 The team's experience with using it in production has shown it to be superior
for configuration and monitoring capabilities, as well as overall performance.
 
 CouchDB's sample haproxy configuration is present in the `code repository`_ and
-release tarball as ``rel/haproxy.cfg``.
+release tarball as ``rel/haproxy.cfg``. It is included below. This example
+is for a 3 node CouchDB cluster:
+
+.. code-block:: text
 
-However, there are suitable alternatives. Below are examples for
-configuring nginx and Caddy web-servers appropriately.
+global
+maxconn 512
+spread-checks 5
+
+defaults
+mode http
+log global
+monitor-uri /_haproxy_health_check
+option log-health-checks
+option httplog
+balance roundrobin
+option forwardfor
+option redispatch
+retries 4
+option http-server-close
+timeout client 15
+timeout server 360
+timeout connect 500
+
+stats enable
+stats uri /_haproxy_stats
+# stats auth admin:admin # Uncomment for basic auth
+
+frontend http-in
+ # This requires HAProxy 1.5.x
+ # bind *:$HAPROXY_PORT
+ bind *:5984
+ default_backend couchdbs
+
+backend couchdbs
+option httpchk GET /_up
+http-check disable-on-404
+server couchdb1 x.x.x.x:5984 check inter 5s
+server couchdb2 x.x.x.x:5984 check inter 5s
+server couchdb3 x.x.x.x:5984 check inter 5s
 
 .. 

[couchdb-documentation] branch more-how-tos deleted (was ff3eb0a)

2018-12-20 Thread wohali
This is an automated email from the ASF dual-hosted git repository.

wohali pushed a change to branch more-how-tos
in repository https://gitbox.apache.org/repos/asf/couchdb-documentation.git.


 was ff3eb0a  Add link to video on vhost/rewrite behaviour

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.



[couchdb-documentation] branch more-how-tos updated (56365b5 -> ff3eb0a)

2018-12-20 Thread wohali
This is an automated email from the ASF dual-hosted git repository.

wohali pushed a change to branch more-how-tos
in repository https://gitbox.apache.org/repos/asf/couchdb-documentation.git.


 discard 56365b5  Add link to video on vhost/rewrite behaviour
 add ff3eb0a  Add link to video on vhost/rewrite behaviour

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (56365b5)
\
 N -- N -- N   refs/heads/more-how-tos (ff3eb0a)


No new revisions were added by this update.

Summary of changes:
 src/best-practices/reverse-proxies.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
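[Editor's note] The force-push notices in these emails distinguish revisions marked "omit" from those marked "discard" purely by reachability: a commit is gone forever only when no remaining ref can reach it. A small sketch of that rule (a hypothetical Python reachability model over the B/O/N diagram, not git's implementation):

```python
# Parent pointers for the force-push diagram:
#  * -- * -- B -- O1 -- O2 -- O3        (old branch tip)
#             \
#              N1 -- N2 -- N3           refs/heads/<branch> (new tip)
parents = {
    "B": [], "O1": ["B"], "O2": ["O1"], "O3": ["O2"],
    "N1": ["B"], "N2": ["N1"], "N3": ["N2"],
}

def reachable(refs):
    """Walk parent pointers from every ref, like 'git rev-list <refs>'."""
    seen, stack = set(), list(refs.values())
    while stack:
        commit = stack.pop()
        if commit not in seen:
            seen.add(commit)
            stack.extend(parents[commit])
    return seen

before = reachable({"branch": "O3"})
after = reachable({"branch": "N3"})   # after the force push
discarded = before - after            # "discard": no ref reaches them
print(sorted(discarded))              # ['O1', 'O2', 'O3']

# If some other ref (say, a backup tag) still pointed at O3, the O commits
# would merely be "omit"ted from the notification, not lost:
kept = reachable({"branch": "N3", "tags/backup": "O3"})
print(sorted(before - kept))          # []
```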



[couchdb] branch master updated: Suppress credo TODO suggests (#1822)

2018-12-20 Thread wohali
This is an automated email from the ASF dual-hosted git repository.

wohali pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/couchdb.git


The following commit(s) were added to refs/heads/master by this push:
 new f60f7a1  Suppress credo TODO suggests (#1822)
f60f7a1 is described below

commit f60f7a1c66a9b238ae66798b82058a4d9dc82731
Author: Ivan Mironov 
AuthorDate: Thu Dec 20 11:13:41 2018 -0500

Suppress credo TODO suggests (#1822)
---
 test/elixir/.credo.exs | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/test/elixir/.credo.exs b/test/elixir/.credo.exs
index 48ae452..e24836c 100644
--- a/test/elixir/.credo.exs
+++ b/test/elixir/.credo.exs
@@ -70,7 +70,7 @@
 # If you don't want TODO comments to cause `mix credo` to fail, just
 # set this value to 0 (zero).
 #
-{Credo.Check.Design.TagTODO, [exit_status: 0]},
+{Credo.Check.Design.TagTODO, false},
 {Credo.Check.Design.TagFIXME, []},
 
 #
@@ -108,7 +108,10 @@
 {Credo.Check.Refactor.NegatedConditionsWithElse, []},
 {Credo.Check.Refactor.Nesting, false},
 {Credo.Check.Refactor.PipeChainStart,
- [excluded_argument_types: [:atom, :binary, :fn, :keyword], 
excluded_functions: []]},
+ [
+   excluded_argument_types: [:atom, :binary, :fn, :keyword],
+   excluded_functions: []
+ ]},
 {Credo.Check.Refactor.UnlessWithElse, []},
 
 #
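[Editor's note] The first hunk of the .credo.exs diff above changes the TODO check from {Credo.Check.Design.TagTODO, [exit_status: 0]} (the check still runs and reports TODO tags, but can never fail "mix credo") to {Credo.Check.Design.TagTODO, false} (the check is not run at all, so TODO tags are no longer reported either). A toy model of that difference (hypothetical Python, not credo's implementation):

```python
def run_checks(source_lines, checks):
    """Tiny model of a lint run: each enabled check may emit issues, and
    the overall exit status is the highest status among triggered checks."""
    issues, exit_status = [], 0
    for name, (enabled, predicate, status) in checks.items():
        if not enabled:
            continue  # {Check, false}: suppressed entirely, no report
        for lineno, line in enumerate(source_lines, 1):
            if predicate(line):
                issues.append((name, lineno))
                exit_status = max(exit_status, status)
    return issues, exit_status

src = ["# TODO: tighten this spec", "x = 1"]
todo = lambda line: "TODO" in line

# exit_status: 0 -- the issue is still reported, but cannot fail the run:
print(run_checks(src, {"TagTODO": (True, todo, 0)}))   # ([('TagTODO', 1)], 0)

# false -- the check is disabled: no report, no failure:
print(run_checks(src, {"TagTODO": (False, todo, 0)}))  # ([], 0)
```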