[jira] [Created] (COUCHDB-3359) couch_peruser flag in configuration is not working...
ASF subversion and git services created COUCHDB-3359:

Summary: couch_peruser flag in configuration is not working...
Key: COUCHDB-3359
URL: https://issues.apache.org/jira/browse/COUCHDB-3359
Project: CouchDB
Issue Type: Bug
Reporter: ASF subversion and git services

couch_peruser flag in configuration is not working in CouchDB 2.0
*Reporter*: Shikhar Bansal
*E-mail*: [mailto:bshikhar13131...@gmail.com]

-- This message was sent by Atlassian JIRA (v6.3.15#6346)
[GitHub] wohali opened a new pull request #894: Fix waitForAttribute.js license
wohali opened a new pull request #894: Fix waitForAttribute.js license URL: https://github.com/apache/couchdb-fauxton/pull/894 We incorrectly labelled this file as APACHE licensed, but per upstream (see https://github.com/apache/couchdb-fauxton/pull/186) this is MIT licensed. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] couchdb-couch-replicator issue #64: 63012 scheduler
Github user nickva commented on the issue: https://github.com/apache/couchdb-couch-replicator/pull/64 Another, cleaned up version of this PR was issued after the monorepo merge. https://github.com/apache/couchdb/pull/470 Keep this one open for a bit longer just in case and to allow comparisons. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (COUCHDB-3324) Scheduling Replicator
[ https://issues.apache.org/jira/browse/COUCHDB-3324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956270#comment-15956270 ]

ASF GitHub Bot commented on COUCHDB-3324:
-
Github user nickva closed the pull request at: https://github.com/apache/couchdb-chttpd/pull/158

> Scheduling Replicator
> -
>
> Key: COUCHDB-3324
> URL: https://issues.apache.org/jira/browse/COUCHDB-3324
> Project: CouchDB
> Issue Type: New Feature
> Reporter: Nick Vatamaniuc
>
> Improve the CouchDB replicator:
> * Allow running a large number of replication jobs.
> * Improve the API with a focus on ease of use and performance. Avoid updating the replication document with transient state updates; instead, create a proper API for querying replication states. At the same time, provide a compatibility mode to let users keep the existing behavior (of getting updates in documents).
> * Improve network resource usage and performance. Multiple connections to the same cluster could share socket connections.
> * Handle rate limiting on target and source HTTP endpoints. Let replication requests auto-discover rate-limit capacity based on a proven algorithm such as an Additive Increase / Multiplicative Decrease feedback control loop.
> * Improve performance by avoiding repeatedly retrying failing replication jobs; use exponential backoff instead.
> * Improve recovery from long (but temporary) network failures. Currently, if replication jobs fail to start 10 times in a row they will not be retried anymore. This is not always desirable: in case of a long enough DNS (or other network) failure, replication jobs will effectively stop until they are manually restarted.
> * Better handling of filtered replications: failing to fetch filters could block the couch replicator manager and lead to message queue backups and memory exhaustion. Also, when replication filter code changes, update the replication accordingly (the replication job ID should change in that case).
> * Provide better metrics to introspect replicator behavior.
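The exponential backoff mentioned in the list above can be sketched as follows (a minimal illustration of the general idea only; the base delay and cap values here are made up, not the replicator's actual tuning):

```python
def backoff_interval(consecutive_failures, base=5, cap=28800):
    """Delay in seconds before retrying a job that has failed
    `consecutive_failures` times in a row: the delay doubles with
    each failure but is capped, so jobs are never given up on
    permanently -- they just retry less and less often."""
    return min(cap, base * 2 ** consecutive_failures)

# A job that keeps failing waits longer between retries each time.
delays = [backoff_interval(n) for n in range(6)]  # [5, 10, 20, 40, 80, 160]
```

Compared with a fixed retry limit, this keeps persistently failing jobs cheap while still letting them recover automatically after a long outage.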
[GitHub] couchdb-chttpd issue #158: 63012 scheduler
Github user nickva commented on the issue: https://github.com/apache/couchdb-chttpd/pull/158 Closing this PR. Another one was issued after the monorepo merge.
[GitHub] couchdb-chttpd pull request #158: 63012 scheduler
Github user nickva closed the pull request at: https://github.com/apache/couchdb-chttpd/pull/158
[jira] [Commented] (COUCHDB-3324) Scheduling Replicator
[ https://issues.apache.org/jira/browse/COUCHDB-3324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956268#comment-15956268 ]

ASF GitHub Bot commented on COUCHDB-3324:
-
Github user nickva closed the pull request at: https://github.com/apache/couchdb-couch/pull/238
[GitHub] couchdb-couch pull request #238: Add _replication_start_time to the doc fiel...
Github user nickva closed the pull request at: https://github.com/apache/couchdb-couch/pull/238
[GitHub] couchdb-couch issue #238: Add _replication_start_time to the doc field valid...
Github user nickva commented on the issue: https://github.com/apache/couchdb-couch/pull/238 Closing this one. A new PR was issued after the monorepo merge.
[GitHub] nickva commented on issue #118: Update changes feed documentation
nickva commented on issue #118: Update changes feed documentation
URL: https://github.com/apache/couchdb-documentation/pull/118#issuecomment-291744957

Wonder if there is a way to improve this one a bit. I like the enumeration of possible values for `feed` inline, but notice that below is a whole section describing each one. Every section has an anchor, so perhaps we could create a ref to it after a brief description. So maybe something like **normal** all past changes returned immediately :ref:`normal` ...? For longpoll, if we keep the short description, maybe mention first that it is mostly like normal, "unless we give it a since=now or the last rev explicitly".
[GitHub] flimzy closed pull request #121: Update docs for PUT attachments in 1.6.x branch
flimzy closed pull request #121: Update docs for PUT attachments in 1.6.x branch URL: https://github.com/apache/couchdb-documentation/pull/121
[GitHub] nickva commented on issue #121: Update docs for PUT attachments in 1.6.x branch
nickva commented on issue #121: Update docs for PUT attachments in 1.6.x branch URL: https://github.com/apache/couchdb-documentation/pull/121#issuecomment-291730808 +1 Thanks! This is related to https://github.com/apache/couchdb-documentation/pull/120
[GitHub] flimzy closed pull request #120: Update docs for PUT attachments
flimzy closed pull request #120: Update docs for PUT attachments URL: https://github.com/apache/couchdb-documentation/pull/120
[GitHub] nickva commented on issue #120: Update docs for PUT attachments
nickva commented on issue #120: Update docs for PUT attachments URL: https://github.com/apache/couchdb-documentation/pull/120#issuecomment-291729467 +1 Thank you!
[jira] [Created] (COUCHDB-3358) Change O(n^2) get function to be more performant
Tony Sun created COUCHDB-3358:

Summary: Change O(n^2) get function to be more performant
Key: COUCHDB-3358
URL: https://issues.apache.org/jira/browse/COUCHDB-3358
Project: CouchDB
Issue Type: Bug
Components: Mango
Reporter: Tony Sun

This is related to https://issues.apache.org/jira/browse/COUCHDB-2951. When a user has a document with lots of field names, or nested fields with arrays, we add these fields to a special $fieldnames field. However, as we add them, we're calling lists:member on that same Acc, making it an O(n^2) operation.
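The complexity difference described in the issue can be sketched like this (an illustration of the general pattern, not Mango's actual Erlang code): checking membership against the accumulator list itself is O(n^2) overall, while tracking seen values in a set makes each check O(1):

```python
def collect_fields_list(fields):
    # O(n^2): every `f not in acc` check scans the accumulator list,
    # analogous to calling lists:member/2 on Acc for each field.
    acc = []
    for f in fields:
        if f not in acc:
            acc.append(f)
    return acc

def collect_fields_set(fields):
    # O(n): a set gives O(1) membership checks; the list is kept
    # alongside it only to preserve first-seen order.
    acc, seen = [], set()
    for f in fields:
        if f not in seen:
            seen.add(f)
            acc.append(f)
    return acc
```

Both return the same de-duplicated field list; only the membership-check cost differs, which matters for documents with many (possibly nested) field names.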
[GitHub] nickva opened a new pull request #470: 63012 scheduler
nickva opened a new pull request #470: 63012 scheduler
URL: https://github.com/apache/couchdb/pull/470

Introduce Scheduling CouchDB Replicator

Jira: https://issues.apache.org/jira/browse/COUCHDB-3324

The core of the new replicator is a scheduler, which allows running a large number of replication jobs by switching between them, stopping some and starting others periodically. Jobs which fail are backed off exponentially. There is also an improved inspection and querying API: `_scheduler/jobs` and `_scheduler/docs`. The replication protocol hasn't changed, so it is possible to replicate between CouchDB 1.x, 2.x, PouchDB, and other implementations of the CouchDB replication protocol.

## Scheduler

The scheduler allows running a large number of replication jobs. It has been tested with up to 100k replication jobs in a 3-node cluster. Replication jobs are run in a fair, round-robin fashion. Scheduler behavior can be configured by these options in the `[replicator]` section:

* `max_jobs` : Number of actively running replications. Setting this too high could cause performance issues; setting it too low could mean replication jobs might not have enough time to make progress before getting unscheduled again. This parameter can be adjusted at runtime and takes effect during the next rescheduling cycle.
* `interval` : Scheduling interval in milliseconds. During each rescheduling cycle the scheduler might start or stop up to `max_churn` jobs.
* `max_churn` : Maximum number of replications to start and stop during rescheduling. This parameter, along with `interval`, defines the rate of job replacement. During startup, however, a much larger number of jobs could be started (up to `max_jobs`) in a short period of time.

## _scheduler/{jobs,docs} API

There is an improved replication state querying API, with a focus on ease of use and performance. The new API avoids having to update the replication document with transient state updates, which in production can lead to conflicts and performance issues. The two new endpoints are:

* `_scheduler/jobs` : Shows active replication jobs. These are jobs managed by the scheduler. Some of them might be running, some might be waiting to run, and some might be backed off (penalized) because they crashed too many times. Semantically this is somewhat equivalent to `_active_tasks`, but it focuses only on replications. Jobs which have completed, or which were never created because of a malformed replication document, will not be shown here as they are not managed by the scheduler. Replications started from the `_replicate` endpoint, not from a document in a `_replicator` db, will also show up here.
* `_scheduler/docs` : An improvement on having to go back and re-read replication documents to query their state. It represents the state of all the replications started from documents in `_replicator` dbs. Unlike `_scheduler/jobs`, it will also show jobs which have failed or completed (that is, which are no longer managed by the scheduler).

## Compatibility Mode

Understandably, some customers are using the document-based API to query replication states (`triggered`, `error`, `completed`, etc.). To ease the upgrade path, there is a compatibility configuration setting:

```
[replicator]
update_docs = false | true
```

It defaults to `false`, but when set to `true` the replicator will continue updating replication documents with the state of the replication jobs.

## Other Improvements

* Network resource usage and performance were improved by implementing a common connection pool. This should help in cases of a large number of connections to the same source or target. Previously, connection pools were shared only within a single replication job.
* Improved rate-limiting handling. Replicator requests will auto-discover rate-limit capacity on targets and sources based on a proven Additive Increase / Multiplicative Decrease feedback control algorithm.
* Improved performance by avoiding repeatedly retrying failing replication jobs; exponential backoff is used instead. In a large multi-user cluster, quite a few replication jobs are invalid, crashing, or failing (for various reasons such as inability to checkpoint to the source, mismatched credentials, or missing databases). Penalizing failing replications frees up system resources for more useful work.
* Improved recovery from long but temporary network failures. Previously, if replication jobs failed to start 10 times in a row, they were not retried anymore. That is sometimes desirable, but in some scenarios (for example, after a sustained DNS failure which eventually recovers), replications would reach their retry limit and cease to work, requiring operator intervention to continue.
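The Additive Increase / Multiplicative Decrease loop mentioned above can be sketched as follows (a generic illustration of the algorithm with made-up constants, not the replicator's actual implementation or tuning):

```python
def aimd_update(rate, success, increment=1.0, decrease_factor=0.5, min_rate=1.0):
    """Additive Increase / Multiplicative Decrease: grow the allowed
    request rate linearly while requests succeed, and cut it
    multiplicatively (here, halve it) when the endpoint signals
    rate limiting (e.g. an HTTP 429 response)."""
    if success:
        return rate + increment
    return max(min_rate, rate * decrease_factor)

rate = 10.0
rate = aimd_update(rate, True)   # success: 10.0 -> 11.0
rate = aimd_update(rate, False)  # rate-limited: 11.0 -> 5.5
```

The gentle linear growth probes for spare capacity, while the sharp multiplicative cut backs off quickly when the source or target pushes back; the same control loop underlies TCP congestion avoidance.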
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955895#comment-15955895 ]

ASF subversion and git services commented on COUCHDB-3288:
--
Commit 314ee06a5a1588a419c23d1f9c7205b612f4c4f0 in couchdb-couch's branch refs/heads/COUCHDB-3287-mixed-db-records from [~paul.joseph.davis]
[ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=314ee06 ]

Reorganize exports from couch_db.erl

Since we're getting ready to add API functions to couch_db.erl, now is a good time to clean up the exports list so that changes are more easily tracked.

COUCHDB-3288

> Remove public db record
> ---
>
> Key: COUCHDB-3288
> URL: https://issues.apache.org/jira/browse/COUCHDB-3288
> Project: CouchDB
> Issue Type: Improvement
> Reporter: Paul Joseph Davis
>
> To enable a mixed-cluster upgrade (i.e., a rolling reboot upgrade) we need to do some preparatory work to remove access to the #db{} record, since this record is shared between nodes.
> This work is all straightforward and just involves changing things like Db#db.main_pid to couch_db:get_main_pid(Db) or similar.
[jira] [Commented] (COUCHDB-3326) Implement clustered purge API: _purge
[ https://issues.apache.org/jira/browse/COUCHDB-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955893#comment-15955893 ]

ASF subversion and git services commented on COUCHDB-3326:
--
Commit 016e1aa0ef4db0bbf47a28a2cce48b85200702d6 in couchdb-couch's branch refs/heads/COUCHDB-3287-mixed-db-records from [~vatamane]
[ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=016e1aa ]

Implement an ETS-based couch_lru

Use a monotonically incrementing counter instead of `erlang:now()`. We don't technically need the time-based functionality; we just want to know relative insertion order. Instead of a gb_tree, use an ordered_set ETS. This keeps items sorted by their update order, with the most recent ones at the bottom. A set ETS replaces the dictionary which maintained the mapping from database names to their entries in the updates table. The interface is the same as the old couch_lru's, so it is a direct swap-in. Thanks to Eric Avdey for the initial version of the test module.

COUCHDB-3326

> Implement clustered purge API: _purge
> -
>
> Key: COUCHDB-3326
> URL: https://issues.apache.org/jira/browse/COUCHDB-3326
> Project: CouchDB
> Issue Type: New Feature
> Components: Database Core, Documentation, HTTP Interface
> Reporter: Mayya Sharipova
>
> This implements the clustered purge API:
> {code}
> curl -H 'Content-Type: application/json' -X POST \
>   "http://adm:pass@127.0.0.1:5984/test1/_purge" \
>   -d '{"d1":["3-410e46c04b51b4c3304ed232790a49da", "3-420e46c04b51b4c3304ed232790a35db"], "d2":["2-a39d6d63f29a956ae39930f84dd71ec3"], "d3":["1-bdca7a3ac9503bf6e46d7d7a782e8f03"]}'
> {code}
> Response: status code 201 or 202
> {code:javascript}
> {
>     "purged": [
>         {
>             "ok": true,       // Quorum was reached: at least W nodes successfully purged the doc
>             "id": "d1",
>             "revs": [
>                 "3-410e46c04b51b4c3304ed232790a49da",
>                 "3-420e46c04b51b4c3304ed232790a35db"
>             ]
>         },
>         {
>             "accepted": true, // Quorum was NOT reached, but the request was accepted
>             "id": "d2",
>             "revs": [
>                 "2-a39d6d63f29a956ae39930f84dd71ec3"
>             ]
>         },
>         {
>             "ok": true,
>             "id": "d3",
>             "revs": []        // (DocId or Revs missing) OR (Revs are not leaf revisions)
>         }
>     ],
>     "purge_seq": "6-g1BMeJzLYWBgYMpgTmHgz8tPSTV2MDQy1zMAQsMckEQiQ5L8sxKZ4UoMcSrJAgC9PRRl"
> }
> {code}
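The couch_lru design from the commit message above (a monotonically incrementing counter instead of wall-clock time, an ordered structure keyed by that counter, plus a reverse mapping from names to counters) can be sketched in Python (a hypothetical illustration only, not couch_lru's actual API):

```python
class SimpleLRU:
    """LRU sketch: `order` maps an ever-increasing counter to a name,
    iterated oldest-first (standing in for the ordered_set ETS), and
    `index` maps each name back to its current counter (standing in
    for the set ETS that replaced the dictionary)."""

    def __init__(self):
        self.counter = 0   # monotonic; replaces erlang:now() since only
                           # relative insertion order matters
        self.order = {}    # counter -> name; Python dicts preserve
                           # insertion order, and the counter only grows
        self.index = {}    # name -> counter

    def insert(self, name):
        """Insert a name, or refresh it to most-recently-used."""
        if name in self.index:
            del self.order[self.index[name]]
        self.counter += 1
        self.order[self.counter] = name
        self.index[name] = self.counter

    def pop_oldest(self):
        """Evict and return the least recently updated name."""
        oldest = next(iter(self.order))  # smallest counter still present
        name = self.order.pop(oldest)
        del self.index[name]
        return name
```

Re-inserting an existing name deletes its old counter entry and appends a fresh one, so eviction order always reflects the most recent update, mirroring the update-order sorting described in the commit.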
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955900#comment-15955900 ]

ASF subversion and git services commented on COUCHDB-3288:
--
Commit c515bcae97a9eaa7d515a4620ba0e44a3f1fa2ef in couchdb-couch's branch refs/heads/COUCHDB-3287-mixed-db-records from [~paul.joseph.davis]
[ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=c515bca ]

Remove public access to the db record

This completes the removal of public access to the db record from the couch application. The large majority of this is removing direct access to the #db.name, #db.main_pid, and #db.update_seq fields.

COUCHDB-3288
[jira] [Commented] (COUCHDB-3287) Implement pluggable storage engines
[ https://issues.apache.org/jira/browse/COUCHDB-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955901#comment-15955901 ]

ASF subversion and git services commented on COUCHDB-3287:
--
Commit 5f6ff5ab15360acf32b22e146bd83a897999f06e in couchdb-couch's branch refs/heads/COUCHDB-3287-mixed-db-records from [~paul.joseph.davis]
[ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=5f6ff5a ]

Allow for mixed db record definitions

This change accounts for differences in the #db record when a cluster is operating in a mixed-version state (i.e., when running a rolling reboot to upgrade). There are only a few operations that are valid on #db records shared between nodes, so rather than attempt to map the entire API between the old and new records, we're limiting ourselves to just the required API calls.

COUCHDB-3287

> Implement pluggable storage engines
> ---
>
> Key: COUCHDB-3287
> URL: https://issues.apache.org/jira/browse/COUCHDB-3287
> Project: CouchDB
> Issue Type: Improvement
> Reporter: Paul Joseph Davis
>
> Opening branches for the pluggable storage engine work described here: http://mail-archives.apache.org/mod_mbox/couchdb-dev/201606.mbox/%3CCAJ_m3YDjA9xym_JRVtd6Xi7LX7Ajwc6EmH_wyCRD1jgTzk8mKA%40mail.gmail.com%3E
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955897#comment-15955897 ]

ASF subversion and git services commented on COUCHDB-3288:
--
Commit 73c273ff93807cb845ccacd769d0ab7e1030b69d in couchdb-couch's branch refs/heads/COUCHDB-3287-mixed-db-records from [~paul.joseph.davis]
[ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=73c273f ]

Update couch_server to not use the db record

This removes introspection of the #db record by couch_server. While it's required for the pluggable storage engine upgrade, it's also nice to remove the hacky overloading of #db record fields for couch_server logic.

COUCHDB-3288
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955898#comment-15955898 ]

ASF subversion and git services commented on COUCHDB-3288:
--
Commit 552a29d1e107014dab6c209f2f91a5c33af89728 in couchdb-couch's branch refs/heads/COUCHDB-3287-mixed-db-records from [~paul.joseph.davis]
[ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=552a29d ]

Add a test helper for creating fake db records

COUCHDB-3288
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955896#comment-15955896 ]

ASF subversion and git services commented on COUCHDB-3288:
--
Commit e1491f1cca38c5cd78e306d999731047f744680e in couchdb-couch's branch refs/heads/COUCHDB-3287-mixed-db-records from [~paul.joseph.davis]
[ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=e1491f1 ]

Move calculate_start_seq and owner_of

These functions were originally implemented in fabric_rpc.erl, where they really didn't belong. Moving them to couch_db.erl allows us to keep the unit tests intact, rather than just removing them now that the #db record is being made private.

COUCHDB-3288
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955883#comment-15955883 ]

ASF subversion and git services commented on COUCHDB-3288:
--
Commit 314ee06a5a1588a419c23d1f9c7205b612f4c4f0 in couchdb-couch's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~paul.joseph.davis]
[ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=314ee06 ]

Reorganize exports from couch_db.erl

COUCHDB-3288
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955888#comment-15955888 ]

ASF subversion and git services commented on COUCHDB-3288:
--
Commit c515bcae97a9eaa7d515a4620ba0e44a3f1fa2ef in couchdb-couch's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~paul.joseph.davis]
[ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=c515bca ]

Remove public access to the db record

COUCHDB-3288
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955887#comment-15955887 ]

ASF subversion and git services commented on COUCHDB-3288:
--
Commit 552a29d1e107014dab6c209f2f91a5c33af89728 in couchdb-couch's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~paul.joseph.davis]
[ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=552a29d ]

Add a test helper for creating fake db records

COUCHDB-3288
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955884#comment-15955884 ]

ASF subversion and git services commented on COUCHDB-3288:
--
Commit e1491f1cca38c5cd78e306d999731047f744680e in couchdb-couch's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~paul.joseph.davis]
[ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=e1491f1 ]

Move calculate_start_seq and owner_of

COUCHDB-3288
[jira] [Commented] (COUCHDB-3326) Implement clustered purge API: _purge
[ https://issues.apache.org/jira/browse/COUCHDB-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955882#comment-15955882 ] ASF subversion and git services commented on COUCHDB-3326: -- Commit 016e1aa0ef4db0bbf47a28a2cce48b85200702d6 in couchdb-couch's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=016e1aa ] Implement an ETS-based couch_lru Use a monotonically incrementing counter instead of `erlang:now()`. We don't technically need the time-based functionality; we just want to know relative insertion order. Instead of a gb_tree, use an ordered_set ETS. This keeps items sorted by their update order, with the most recent ones at the bottom. A set ETS replaces the dictionary which maintains a mapping from database names to their entries in the updates table. The interface is the same as the old couch_lru, so it is a direct swap-in. Thanks to Eric Avdey for the initial version of the test module. 
COUCHDB-3326 > Implement clustered purge API: _purge > - > > Key: COUCHDB-3326 > URL: https://issues.apache.org/jira/browse/COUCHDB-3326 > Project: CouchDB > Issue Type: New Feature > Components: Database Core, Documentation, HTTP Interface >Reporter: Mayya Sharipova > > This implements the clustered purge API: > {code:} > curl -H 'Content-Type: application/json' -X POST > "http://adm:pass@127.0.0.1:5984/test1/_purge"; -d > '{"d1":["3-410e46c04b51b4c3304ed232790a49da", > "3-420e46c04b51b4c3304ed232790a35db"],"d2":["2-a39d6d63f29a956ae39930f84dd71ec3"], > "d3":["1-bdca7a3ac9503bf6e46d7d7a782e8f03"]}' > {code} > Response: status_code 201 or 202 > {code:javascript} > { > "purged": [ > { > "ok": true, //Quorum was reached, at least W nodes > successfully purged doc > "id": "d1", > "revs": [ > "3-410e46c04b51b4c3304ed232790a49da", >"3-420e46c04b51b4c3304ed232790a35db" > ] > }, > { > "accepted": true, //Quorum was NOT reached, but request was > accepted > "id": "d2", > "revs": [ > "2-a39d6d63f29a956ae39930f84dd71ec3" > ] > }, > { > "ok": true, > "id": "d3", > "revs": []//(DocId or Revs missing) OR (Revs are not leaf > revisions) > } ], > "purge_seq": > "6-g1BMeJzLYWBgYMpgTmHgz8tPSTV2MDQy1zMAQsMckEQiQ5L8sxKZ4UoMcSrJAgC9PRRl" > } > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
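The ETS-based couch_lru commit above replaces timestamps with a monotonically incrementing counter plus two tables: an ordered mapping from counter to name (so the oldest entry is always first) and a mapping from name to its current counter. A minimal Python analogue follows; class and method names are hypothetical, not the actual couch_lru API. A plain dict stands in for the ordered_set ETS, since counters are inserted in increasing order and Python dicts preserve insertion order:

```python
import itertools


class CounterLRU:
    """Sketch of the couch_lru idea: relative insertion order via a
    monotonic counter, not wall-clock time. Illustrative names only."""

    def __init__(self):
        self._tick = itertools.count()  # stands in for the incrementing counter
        self._updates = {}              # counter -> name; plays the ordered_set ETS role
        self._dbs = {}                  # name -> counter; plays the set ETS role

    def update(self, name):
        # Re-inserting with a fresh counter moves the entry to the "bottom"
        # (most recently used); the stale counter entry is dropped.
        old = self._dbs.get(name)
        if old is not None:
            del self._updates[old]
        tick = next(self._tick)
        self._updates[tick] = name
        self._dbs[name] = tick

    def close(self):
        # Evict the least recently used entry: the smallest counter, which is
        # the first key of the dict because counters only ever grow.
        tick, name = next(iter(self._updates.items()))
        del self._updates[tick]
        del self._dbs[name]
        return name
```

Touching "a" after "b" makes "b" the eviction candidate, mirroring the "most recent ones at the bottom" ordering the commit describes.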
[jira] [Commented] (COUCHDB-2992) Add additional support for document size
[ https://issues.apache.org/jira/browse/COUCHDB-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955881#comment-15955881 ] ASF subversion and git services commented on COUCHDB-2992: -- Commit 7e48bda4459cc8e4dbb8bd86966792f533571d83 in couchdb-couch's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=7e48bda ] Allow limiting maximum document body size Configuration is via the `couchdb.max_document_size` setting. In the past this was implemented as a maximum HTTP request body size; this change finally implements it by actually checking a document's body size. COUCHDB-2992 > Add additional support for document size > > > Key: COUCHDB-2992 > URL: https://issues.apache.org/jira/browse/COUCHDB-2992 > Project: CouchDB > Issue Type: Improvement > Components: Database Core >Reporter: Tony Sun > > Currently, a max_document_size of 64 GB is the only restriction on users > creating documents. Large documents often lead to issues with our indexers. > This feature will allow users finer-grained control over document size. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
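The key point of the commit above is that the limit is enforced against the document body itself, not against the size of the HTTP request carrying it. A minimal sketch of that check, under the assumption that "body size" means the size of the serialized JSON body (the function name and limit value are illustrative, not CouchDB's actual implementation):

```python
import json

# Hypothetical stand-in for the couchdb.max_document_size setting, in bytes.
MAX_DOCUMENT_SIZE = 8_000_000


def check_document_size(doc, limit=MAX_DOCUMENT_SIZE):
    """Reject a document whose serialized body exceeds the limit.

    Measuring the serialized body (rather than the request) means a small
    document inside a large multipart request still passes, and vice versa.
    """
    body_size = len(json.dumps(doc, separators=(",", ":")).encode("utf-8"))
    if body_size > limit:
        raise ValueError(f"document_too_large: {body_size} > {limit}")
    return body_size
```

This is why the commit describes the old request-size check as only an approximation of the intended restriction.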
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955885#comment-15955885 ] ASF subversion and git services commented on COUCHDB-3288: -- Commit 73c273ff93807cb845ccacd769d0ab7e1030b69d in couchdb-couch's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=73c273f ] Update couch_server to not use the db record This removes introspection of the #db record by couch_server. While it's required for the pluggable storage engine upgrade, it's also nice to remove the hacky overloading of #db record fields for couch_server logic. COUCHDB-3288 > Remove public db record > --- > > Key: COUCHDB-3288 > URL: https://issues.apache.org/jira/browse/COUCHDB-3288 > Project: CouchDB > Issue Type: Improvement >Reporter: Paul Joseph Davis > > To enable a mixed cluster upgrade (i.e., rolling reboot upgrade) we need to > do some preparatory work to remove access to the #db{} record since this > record is shared between nodes. > This work is all straightforward and just involves changing things like > Db#db.main_pid to couch_db:get_main_pid(Db) or similar. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-3324) Scheduling Replicator
[ https://issues.apache.org/jira/browse/COUCHDB-3324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955858#comment-15955858 ] ASF subversion and git services commented on COUCHDB-3324: -- Commit 8ab8d1c0b2ec8a1dfb84f804796001610448920e in couchdb's branch refs/heads/63012-scheduler from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb.git;h=8ab8d1c ] Stitch scheduling replicator together. Glue together all the scheduling replicator pieces. The scheduler is the main component. It can run a large number of replication jobs by switching between them, stopping and starting some periodically. Jobs which fail are backed off exponentially. Normal (non-continuous) jobs will be allowed to run to completion to preserve their current semantics. Scheduler behavior can be configured by these options in the `[replicator]` section: * `max_jobs` : Number of actively running replications. Making this too high could cause performance issues. Making it too low could mean replication jobs might not have enough time to make progress before getting unscheduled again. This parameter can be adjusted at runtime and will take effect during the next rescheduling cycle. * `interval` : Scheduling interval in milliseconds. During each rescheduling cycle the scheduler might start or stop up to "max_churn" jobs. * `max_churn` : Maximum number of replications to start and stop during rescheduling. This parameter, along with "interval", defines the rate of job replacement. During startup, however, a much larger number of jobs could be started (up to max_jobs) in a short period of time. Replication jobs are added to the scheduler by the document processor or from the `couch_replicator:replicate/2` function when called from the `_replicate` HTTP endpoint handler. The document processor listens for updates via the couch_multidb_changes module, then tries to add replication jobs to the scheduler. 
Sometimes translating a document update into a replication job can fail, either permanently (if the document is malformed and missing expected fields, for example) or temporarily (if it is a filtered replication and the filter cannot be fetched). A failed filter fetch will be retried with an exponential backoff. couch_replicator_clustering is in charge of monitoring cluster membership changes. When membership changes, after a configurable quiet period, a rescan will be initiated. The rescan will shuffle replication jobs to make sure each replication job is running on only one node. A new set of stats was added to introspect scheduler and doc processor internals. The top replication supervisor structure is `rest_for_one`. This means if a child crashes, all children to the "right" of it will be restarted (if the supervisor hierarchy is visualized as an upside-down tree). Clustering, connection pool and rate limiter are towards the "left" as they are more fundamental; if the clustering child crashes, most other components will be restarted. The doc processor and multi-db changes children are towards the "right". If they crash, they can be safely restarted without affecting already running replications or components like clustering or the connection pool. Jira: COUCHDB-3324 > Scheduling Replicator > - > > Key: COUCHDB-3324 > URL: https://issues.apache.org/jira/browse/COUCHDB-3324 > Project: CouchDB > Issue Type: New Feature >Reporter: Nick Vatamaniuc > > Merge scheduling replicator -- This message was sent by Atlassian JIRA (v6.3.15#6346)
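The commit message above says failing jobs "are backed off exponentially" and that failed filter fetches are retried the same way. A minimal sketch of such a backoff schedule follows; the base interval, cap, and jitter scheme are assumptions for illustration, not the replicator's actual constants:

```python
import random

BASE_INTERVAL_MS = 500        # illustrative base delay, not the real value
MAX_BACKOFF_MS = 8 * 60_000   # illustrative cap so delays don't grow forever


def backoff_interval(consecutive_failures):
    """Delay before retrying a job that has failed N times in a row.

    Doubles per failure up to a cap, with jitter in the upper half of the
    window so many failing jobs don't all retry at the same instant.
    """
    delay = min(BASE_INTERVAL_MS * 2 ** consecutive_failures, MAX_BACKOFF_MS)
    return delay // 2 + random.randint(0, delay // 2)
```

Capping the delay addresses the problem called out in COUCHDB-3324: with a cap and unlimited retries, jobs recover on their own after a long network outage instead of permanently giving up after a fixed number of attempts.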
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955857#comment-15955857 ] ASF subversion and git services commented on COUCHDB-3288: -- Commit a1107b5fe8bf594bc2e8070b10fcd939a1b090c5 in couchdb-mango's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-mango.git;h=a1107b5 ] Remove public db record COUCHDB-3288 > Remove public db record > --- > > Key: COUCHDB-3288 > URL: https://issues.apache.org/jira/browse/COUCHDB-3288 > Project: CouchDB > Issue Type: Improvement >Reporter: Paul Joseph Davis > > To enable a mixed cluster upgrade (i.e., rolling reboot upgrade) we need to > do some preparatory work to remove access to the #db{} record since this > record is shared between nodes. > This work is all straightforward and just involves changing things like > Db#db.main_pid to couch_db:get_main_pid(Db) or similar. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-3202) do not allow empty field names
[ https://issues.apache.org/jira/browse/COUCHDB-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955856#comment-15955856 ] ASF subversion and git services commented on COUCHDB-3202: -- Commit 6660b37d6813823804df3444f532abf6736936f7 in couchdb-mango's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~tonysun83] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-mango.git;h=6660b37 ] Do not allow empty field name Currently, the indexer crashes when a field name is empty. Even though it's valid JSON, we should disallow empty field names to match the selector syntax, which requires a non-empty field name for queries. COUCHDB-3202 > do not allow empty field names > -- > > Key: COUCHDB-3202 > URL: https://issues.apache.org/jira/browse/COUCHDB-3202 > Project: CouchDB > Issue Type: Bug > Components: Mango >Reporter: Tony Sun > > {"" : "foo"} crashes our mango indexer. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-2992) Add additional support for document size
[ https://issues.apache.org/jira/browse/COUCHDB-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955851#comment-15955851 ] ASF subversion and git services commented on COUCHDB-2992: -- Commit 3b15107df83a16a26dbc6c06a1a080437cb558b8 in couchdb-fabric's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-fabric.git;h=3b15107 ] Allow limiting maximum document body size Update the doc function to check and validate document body sizes. Main implementation is in PR: https://github.com/apache/couchdb-couch/pull/235 COUCHDB-2992 > Add additional support for document size > > > Key: COUCHDB-2992 > URL: https://issues.apache.org/jira/browse/COUCHDB-2992 > Project: CouchDB > Issue Type: Improvement > Components: Database Core >Reporter: Tony Sun > > Currently, a max_document_size of 64 GB is the only restriction on users > creating documents. Large documents often lead to issues with our indexers. > This feature will allow users finer-grained control over document size. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-3287) Implement pluggable storage engines
[ https://issues.apache.org/jira/browse/COUCHDB-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955854#comment-15955854 ] ASF subversion and git services commented on COUCHDB-3287: -- Commit 6803aa03a940e68f41037de72b602ca1c1d3c5b0 in couchdb-fabric's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-fabric.git;h=6803aa0 ] Pass the storage engine option to RPC workers COUCHDB-3287 > Implement pluggable storage engines > --- > > Key: COUCHDB-3287 > URL: https://issues.apache.org/jira/browse/COUCHDB-3287 > Project: CouchDB > Issue Type: Improvement >Reporter: Paul Joseph Davis > > Opening branches for the pluggable storage engine work described here: > http://mail-archives.apache.org/mod_mbox/couchdb-dev/201606.mbox/%3CCAJ_m3YDjA9xym_JRVtd6Xi7LX7Ajwc6EmH_wyCRD1jgTzk8mKA%40mail.gmail.com%3E -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-3287) Implement pluggable storage engines
[ https://issues.apache.org/jira/browse/COUCHDB-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955853#comment-15955853 ] ASF subversion and git services commented on COUCHDB-3287: -- Commit dc266ff51b222489dd048cb96d30c90d3afebf85 in couchdb-fabric's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-fabric.git;h=dc266ff ] Update to use new pluggable storage API COUCHDB-3287 > Implement pluggable storage engines > --- > > Key: COUCHDB-3287 > URL: https://issues.apache.org/jira/browse/COUCHDB-3287 > Project: CouchDB > Issue Type: Improvement >Reporter: Paul Joseph Davis > > Opening branches for the pluggable storage engine work described here: > http://mail-archives.apache.org/mod_mbox/couchdb-dev/201606.mbox/%3CCAJ_m3YDjA9xym_JRVtd6Xi7LX7Ajwc6EmH_wyCRD1jgTzk8mKA%40mail.gmail.com%3E -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-3302) Attachment replication over low bandwidth network connections
[ https://issues.apache.org/jira/browse/COUCHDB-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955849#comment-15955849 ] ASF subversion and git services commented on COUCHDB-3302: -- Commit 6e9074bc8778e00471d96191319ac67d6c78c05a in couchdb-fabric's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-fabric.git;h=6e9074b ] Prevent attachment upload from timing out during update_docs fabric call Currently if an attachment was large enough or the connection was slow enough such that it took more than fabric.request_timeout = 6 milliseconds, the fabric request would time out during attachment data transfer from coordinator node to other nodes and the whole request would fail. This was most evident when replicating database with large attachments. The fix is to periodically send `attachment_chunk_received` to coordinator to prevent the timeout. COUCHDB-3302 > Attachment replication over low bandwidth network connections > - > > Key: COUCHDB-3302 > URL: https://issues.apache.org/jira/browse/COUCHDB-3302 > Project: CouchDB > Issue Type: Bug > Components: Replication >Reporter: Jan Lehnardt > Attachments: attach_large.py, replication-failure.log, > replication-failure-target.log > > > Setup: > Two CouchDB instances `source` (5981) and `target` (5983) with a 2MBit > network connection (simulated locally with traffic shaping, see way below for > an example). > {noformat} > git clone https://github.com/apache/couchdb.git > cd couchdb > ./configure --disable-docs --disable-fauxton > make release > cd .. 
> cp -r couchdb/rel/couchdb source > cp -r couchdb/rel/couchdb target > # set up local ini: chttpd / port: 5981 / 5983 > # set up vm.args: source@hostname.local / target@hostname.local > # no admins > Start both CouchDB in their own terminal windows: ./bin/couchdb > # create all required databases, and our `t` test database > curl -X PUT http://127.0.0.1:598{1,3}/{_users,_replicator,_global_changes,t} > # create 64MB attachments > dd if=/dev/urandom of=att-64 bs=1024 count=65536 > # create doc on source > curl -X PUT http://127.0.0.1:5981/t/doc1/att_64 -H 'Content-Type: > application/octet-stream' -d @att-64 > # replicate to target > curl -X POST http://127.0.0.1:5981/_replicate -Hcontent-type:application/json > -d '{"source":"http://127.0.0.1:5981/t","target":"http://127.0.0.1:5983/t"}' > {noformat} > With the traffic shaping in place, the replication call doesn’t return, and > eventually CouchDB fails with: > {noformat} > [error] 2017-02-16T17:37:30.488990Z source@hostname.local emulator > Error in process <0.15811.0> on node 'source@hostname.local' with exit value: > {{nocatch,{mp_parser_died,noproc}},[{couch_att,'-foldl/4-fun-0-',3,[{file,"src/couch_att.erl"},{line,591}]},{couch_att,fold_streamed_data,4,[{file,"src/couch_att.erl"},{line,642}]},{couch_att,foldl,4,[{file,"src/couch_att.erl"},{line,595}]},{couch_httpd_multipart,atts_to_mp,4,[{file,"src/couch_httpd_multipart.erl"},{line,208}]}]} > [error] 2017-02-16T17:37:30.490610Z source@hostname.local <0.8721.0> > Replicator, request PUT to "http://127.0.0.1:5983/t/doc1?new_edits=false"; > failed due to error {error, > {'EXIT', > {{{nocatch,{mp_parser_died,noproc}}, > [{couch_att,'-foldl/4-fun-0-',3, >[{file,"src/couch_att.erl"},{line,591}]}, >{couch_att,fold_streamed_data,4, >[{file,"src/couch_att.erl"},{line,642}]}, >{couch_att,foldl,4,[{file,"src/couch_att.erl"},{line,595}]}, >{couch_httpd_multipart,atts_to_mp,4, >[{file,"src/couch_httpd_multipart.erl"},{line,208}]}]}, > {gen_server,call, > [<0.15778.0>, > 
{send_req, > {{url,"http://127.0.0.1:5983/t/doc1?new_edits=false";, >"127.0.0.1",5983,undefined,undefined, >"/t/doc1?new_edits=false",http,ipv4_address}, >[{"Accept","application/json"}, > {"Content-Length",33194202}, > {"Content-Type", > "multipart/related; > boundary=\"0dea87076009b928b191e0b456375c93\""}, > {"User-Agent","CouchDB-Replicator/2.0.0"}], >put, >{#Fun, > > {<<"{\"_id\":\"doc1\",\"_rev\":\"1-15ae43c5b53de894b936c08db31d537c\",\"_revisions\":{\"start\":1,\"ids\":[\"15ae43c5b53de894b936c08db31d537c\"]},\"_attachments\":{\"att_64\":{\"content_type\":\"application/octet-stream\",\"revpos\":1,\"digest\":\"md5-s3AA0cYvwOzrSFTaALGh8g==\",\"length\":33193656,\"follows\":true}}}">>, > [{att,<<"att_64">>,<<"ap
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955852#comment-15955852 ] ASF subversion and git services commented on COUCHDB-3288: -- Commit c1f15015f6a9ee984f70ea8853fd095b702261ec in couchdb-fabric's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-fabric.git;h=c1f1501 ] Remove public db record COUCHDB-3288 > Remove public db record > --- > > Key: COUCHDB-3288 > URL: https://issues.apache.org/jira/browse/COUCHDB-3288 > Project: CouchDB > Issue Type: Improvement >Reporter: Paul Joseph Davis > > To enable a mixed cluster upgrade (i.e., rolling reboot upgrade) we need to > do some preparatory work to remove access to the #db{} record since this > record is shared between nodes. > This work is all straightforward and just involves changing things like > Db#db.main_pid to couch_db:get_main_pid(Db) or similar. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-3113) fabric:open_revs can return {ok, []}
[ https://issues.apache.org/jira/browse/COUCHDB-3113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955848#comment-15955848 ] ASF subversion and git services commented on COUCHDB-3113: -- Commit 70535eeb9b9c226129bdc96cbca8492fbc867cf6 in couchdb-fabric's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~tonysun83] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-fabric.git;h=70535ee ] Use RealReplyCount to distinguish worker replies and invalid docs We use {ok, []} in couch_db:open_doc_revs_int/3 as a return value when the document does not exist and open_revs=all. This leads to an incorrect all_workers_died error. We use ReplyCount and RealReplyCount to distinguish between when no workers were actually used in a reply versus when the document does not exist. COUCHDB-3113 > fabric:open_revs can return {ok, []} > > > Key: COUCHDB-3113 > URL: https://issues.apache.org/jira/browse/COUCHDB-3113 > Project: CouchDB > Issue Type: Bug >Reporter: ILYA > > According to its typespec, fabric:open_revs should return > - {ok, #doc{}} > - {{not_found,missing}, revision()} > However, in the case when the coordinator receives rexi_EXIT from multiple > workers before the reply (for example when a worker crashes), the open_revs > reply becomes {ok, []}. > This is because we dispatch rexi_DOWN and rexi_EXIT recursively > to the handle_message({ok, Replies}) clause [see > here|https://github.com/apache/couchdb-fabric/blob/master/src/fabric_doc_open_revs.erl#L73]. Note that we set the reply to [] and the worker to nil. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
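The ambiguity the commit above resolves is that an empty reply can mean two different things: no worker answered at all, or workers answered but the document has no matching revisions. A sketch of the two-counter idea in Python (the function and return values are illustrative, not the actual fabric_doc_open_revs code):

```python
def summarize_replies(replies):
    """Classify coordinator results the way the commit describes.

    `replies` is a list of per-worker revision lists. The total count says
    whether any worker replied at all; the count of non-empty replies says
    whether any worker actually found the document. Only when NO worker
    replies is it correct to report all_workers_died.
    """
    reply_count = len(replies)                       # every worker reply
    real_reply_count = sum(1 for r in replies if r)  # replies with revisions
    if reply_count == 0:
        return "all_workers_died"
    if real_reply_count == 0:
        return "not_found"   # workers replied, doc just doesn't exist
    return "ok"
```

With a single counter, the `[[], []]` case (all workers replied `{ok, []}`) collapses into the worker-crash case, which is exactly the incorrect `all_workers_died` error the commit fixes.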
[jira] [Commented] (COUCHDB-3109) 500 when include_docs=true for linked documents
[ https://issues.apache.org/jira/browse/COUCHDB-3109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955847#comment-15955847 ] ASF subversion and git services commented on COUCHDB-3109: -- Commit cf220b2e927093e3bd6f409b4ca9f7b1be0a04a3 in couchdb-fabric's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~tonysun83] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-fabric.git;h=cf220b2 ] Add Else Clause For Embed Doc When open_doc or open_revs returns an error, we set the doc value to be an error message. This way we account for errors rather than transform_row throwing a function_clause. COUCHDB-3109 > 500 when include_docs=true for linked documents > --- > > Key: COUCHDB-3109 > URL: https://issues.apache.org/jira/browse/COUCHDB-3109 > Project: CouchDB > Issue Type: Bug >Reporter: ILYA > > The problem happens when the following conditions are satisfied: > - the user uses the [linked documents|http://wiki.apache.org/couchdb/Introduction_to_CouchDB_views#Linked_documents] > feature, i.e. the view emits {_id: "other_doc_id"} > - the query has include_docs=true > - one of the shards returns an error or times out > In this case we get a case_clause error either > - in [case fabric:open_doc|https://github.com/apache/couchdb-fabric/blob/master/src/fabric_view.erl#L171] > - in [case fabric:open_revs|https://github.com/apache/couchdb-fabric/blob/master/src/fabric_view.erl#L179] > This case_clause error propagates to > [transform_row|https://github.com/apache/couchdb-fabric/blob/master/src/fabric_view.erl#L132] > and fails there since transform_row doesn't have a clause to handle errors. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-3113) fabric:open_revs can return {ok, []}
[ https://issues.apache.org/jira/browse/COUCHDB-3113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955839#comment-15955839 ] ASF subversion and git services commented on COUCHDB-3113: -- Commit dd02a3938f267716e3479b7162e0b0a4f8ba3d51 in couchdb-fabric's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~tonysun83] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-fabric.git;h=dd02a39 ] Return error when workers crash Currently, when one worker survives in fabric_open_revs, we return that as the response. However, when all workers crash, we still return {ok, []}. This changes the response to an error. COUCHDB-3113 > fabric:open_revs can return {ok, []} > > > Key: COUCHDB-3113 > URL: https://issues.apache.org/jira/browse/COUCHDB-3113 > Project: CouchDB > Issue Type: Bug >Reporter: ILYA > > According to its typespec, fabric:open_revs should return > - {ok, #doc{}} > - {{not_found,missing}, revision()} > However, in the case when the coordinator receives rexi_EXIT from multiple > workers before the reply (for example when a worker crashes), the open_revs > reply becomes {ok, []}. > This is because we dispatch rexi_DOWN and rexi_EXIT recursively > to the handle_message({ok, Replies}) clause [see > here|https://github.com/apache/couchdb-fabric/blob/master/src/fabric_doc_open_revs.erl#L73]. Note that we set the reply to [] and the worker to nil. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-2992) Add additional support for document size
[ https://issues.apache.org/jira/browse/COUCHDB-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955843#comment-15955843 ] ASF subversion and git services commented on COUCHDB-2992: -- Commit 3b15107df83a16a26dbc6c06a1a080437cb558b8 in couchdb-fabric's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-fabric.git;h=3b15107 ] Allow limiting maximum document body size Update the doc function to check and validate document body sizes. Main implementation is in PR: https://github.com/apache/couchdb-couch/pull/235 COUCHDB-2992 > Add additional support for document size > > > Key: COUCHDB-2992 > URL: https://issues.apache.org/jira/browse/COUCHDB-2992 > Project: CouchDB > Issue Type: Improvement > Components: Database Core >Reporter: Tony Sun > > Currently, a max_document_size of 64 GB is the only restriction on users > creating documents. Large documents often lead to issues with our indexers. > This feature will allow users finer-grained control over document size. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955844#comment-15955844 ] ASF subversion and git services commented on COUCHDB-3288: -- Commit c1f15015f6a9ee984f70ea8853fd095b702261ec in couchdb-fabric's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-fabric.git;h=c1f1501 ] Remove public db record COUCHDB-3288 > Remove public db record > --- > > Key: COUCHDB-3288 > URL: https://issues.apache.org/jira/browse/COUCHDB-3288 > Project: CouchDB > Issue Type: Improvement >Reporter: Paul Joseph Davis > > To enable a mixed cluster upgrade (i.e., rolling reboot upgrade) we need to > do some preparatory work to remove access to the #db{} record since this > record is shared between nodes. > This work is all straightforward and just involves changing things like > Db#db.main_pid to couch_db:get_main_pid(Db) or similar. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-3109) 500 when include_docs=true for linked documents
[ https://issues.apache.org/jira/browse/COUCHDB-3109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955840#comment-15955840 ] ASF subversion and git services commented on COUCHDB-3109: -- Commit cf220b2e927093e3bd6f409b4ca9f7b1be0a04a3 in couchdb-fabric's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~tonysun83] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-fabric.git;h=cf220b2 ] Add Else Clause For Embed Doc When open_doc or open_revs returns an error, we set the doc value to be an error message. This way we account for errors rather than transform_row throwing a function_clause. COUCHDB-3109 > 500 when include_docs=true for linked documents > --- > > Key: COUCHDB-3109 > URL: https://issues.apache.org/jira/browse/COUCHDB-3109 > Project: CouchDB > Issue Type: Bug >Reporter: ILYA > > The problem happens when the following conditions are satisfied: > - the user uses the [linked documents|http://wiki.apache.org/couchdb/Introduction_to_CouchDB_views#Linked_documents] > feature, i.e. the view emits {_id: "other_doc_id"} > - the query has include_docs=true > - one of the shards returns an error or times out > In this case we get a case_clause error either > - in [case fabric:open_doc|https://github.com/apache/couchdb-fabric/blob/master/src/fabric_view.erl#L171] > - in [case fabric:open_revs|https://github.com/apache/couchdb-fabric/blob/master/src/fabric_view.erl#L179] > This case_clause error propagates to > [transform_row|https://github.com/apache/couchdb-fabric/blob/master/src/fabric_view.erl#L132] > and fails there since transform_row doesn't have a clause to handle errors. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-3113) fabric:open_revs can return {ok, []}
[ https://issues.apache.org/jira/browse/COUCHDB-3113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955841#comment-15955841 ] ASF subversion and git services commented on COUCHDB-3113: -- Commit 70535eeb9b9c226129bdc96cbca8492fbc867cf6 in couchdb-fabric's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~tonysun83] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-fabric.git;h=70535ee ] Use RealReplyCount to distinguish worker replies and invalid docs We use {ok, []} in couch_db:open_doc_revs_int/3 as a return value when the document does not exist and open_revs=all. This leads to an incorrect all_workers_died error. We use ReplyCount and RealReplyCount to distinguish between when no workers were actually used in a reply versus when the document does not exist. COUCHDB-3113 > fabric:open_revs can return {ok, []} > > > Key: COUCHDB-3113 > URL: https://issues.apache.org/jira/browse/COUCHDB-3113 > Project: CouchDB > Issue Type: Bug >Reporter: ILYA > > According to its typespec, fabric:open_revs should return > - {ok, #doc{}} > - {{not_found,missing}, revision()} > However, in the case when the coordinator receives rexi_EXIT from multiple > workers before the reply (for example when a worker crashes), the open_revs > reply becomes {ok, []}. > This is because we dispatch rexi_DOWN and rexi_EXIT recursively > to the handle_message({ok, Replies}) clause [see > here|https://github.com/apache/couchdb-fabric/blob/master/src/fabric_doc_open_revs.erl#L73]. Note that we set the reply to [] and the worker to nil. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-3302) Attachment replication over low bandwidth network connections
[ https://issues.apache.org/jira/browse/COUCHDB-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955842#comment-15955842 ] ASF subversion and git services commented on COUCHDB-3302: -- Commit 6e9074bc8778e00471d96191319ac67d6c78c05a in couchdb-fabric's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-fabric.git;h=6e9074b ] Prevent attachment upload from timing out during update_docs fabric call Currently if an attachment was large enough or the connection was slow enough such that it took more than fabric.request_timeout = 6 milliseconds, the fabric request would time out during attachment data transfer from coordinator node to other nodes and the whole request would fail. This was most evident when replicating database with large attachments. The fix is to periodically send `attachment_chunk_received` to coordinator to prevent the timeout. COUCHDB-3302 > Attachment replication over low bandwidth network connections > - > > Key: COUCHDB-3302 > URL: https://issues.apache.org/jira/browse/COUCHDB-3302 > Project: CouchDB > Issue Type: Bug > Components: Replication >Reporter: Jan Lehnardt > Attachments: attach_large.py, replication-failure.log, > replication-failure-target.log > > > Setup: > Two CouchDB instances `source` (5981) and `target` (5983) with a 2MBit > network connection (simulated locally with traffic shaping, see way below for > an example). > {noformat} > git clone https://github.com/apache/couchdb.git > cd couchdb > ./configure --disable-docs --disable-fauxton > make release > cd .. 
> cp -r couchdb/rel/couchdb source > cp -r couchdb/rel/couchdb target > # set up local ini: chttpd / port: 5981 / 5983 > # set up vm.args: source@hostname.local / target@hostname.local > # no admins > Start both CouchDB in their own terminal windows: ./bin/couchdb > # create all required databases, and our `t` test database > curl -X PUT http://127.0.0.1:598{1,3}/{_users,_replicator,_global_changes,t} > # create 64MB attachments > dd if=/dev/urandom of=att-64 bs=1024 count=65536 > # create doc on source > curl -X PUT http://127.0.0.1:5981/t/doc1/att_64 -H 'Content-Type: > application/octet-stream' -d @att-64 > # replicate to target > curl -X POST http://127.0.0.1:5981/_replicate -Hcontent-type:application/json > -d '{"source":"http://127.0.0.1:5981/t","target":"http://127.0.0.1:5983/t"}' > {noformat} > With the traffic shaping in place, the replication call doesn’t return, and > eventually CouchDB fails with: > {noformat} > [error] 2017-02-16T17:37:30.488990Z source@hostname.local emulator > Error in process <0.15811.0> on node 'source@hostname.local' with exit value: > {{nocatch,{mp_parser_died,noproc}},[{couch_att,'-foldl/4-fun-0-',3,[{file,"src/couch_att.erl"},{line,591}]},{couch_att,fold_streamed_data,4,[{file,"src/couch_att.erl"},{line,642}]},{couch_att,foldl,4,[{file,"src/couch_att.erl"},{line,595}]},{couch_httpd_multipart,atts_to_mp,4,[{file,"src/couch_httpd_multipart.erl"},{line,208}]}]} > [error] 2017-02-16T17:37:30.490610Z source@hostname.local <0.8721.0> > Replicator, request PUT to "http://127.0.0.1:5983/t/doc1?new_edits=false"; > failed due to error {error, > {'EXIT', > {{{nocatch,{mp_parser_died,noproc}}, > [{couch_att,'-foldl/4-fun-0-',3, >[{file,"src/couch_att.erl"},{line,591}]}, >{couch_att,fold_streamed_data,4, >[{file,"src/couch_att.erl"},{line,642}]}, >{couch_att,foldl,4,[{file,"src/couch_att.erl"},{line,595}]}, >{couch_httpd_multipart,atts_to_mp,4, >[{file,"src/couch_httpd_multipart.erl"},{line,208}]}]}, > {gen_server,call, > [<0.15778.0>, > 
{send_req, > {{url,"http://127.0.0.1:5983/t/doc1?new_edits=false";, >"127.0.0.1",5983,undefined,undefined, >"/t/doc1?new_edits=false",http,ipv4_address}, >[{"Accept","application/json"}, > {"Content-Length",33194202}, > {"Content-Type", > "multipart/related; > boundary=\"0dea87076009b928b191e0b456375c93\""}, > {"User-Agent","CouchDB-Replicator/2.0.0"}], >put, >{#Fun, > > {<<"{\"_id\":\"doc1\",\"_rev\":\"1-15ae43c5b53de894b936c08db31d537c\",\"_revisions\":{\"start\":1,\"ids\":[\"15ae43c5b53de894b936c08db31d537c\"]},\"_attachments\":{\"att_64\":{\"content_type\":\"application/octet-stream\",\"revpos\":1,\"digest\":\"md5-s3AA0cYvwOzrSFTaALGh8g==\",\"length\":33193656,\"follows\":true}}}">>, > [{att,<<"att_64">>,<<"appl
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955832#comment-15955832 ] ASF subversion and git services commented on COUCHDB-3288: -- Commit a8ac02d3a423ca5798018edb6bf3690b742cf94c in couchdb-couch-replicator's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-replicator.git;h=a8ac02d ] Remove public db record COUCHDB-3288 > Remove public db record > --- > > Key: COUCHDB-3288 > URL: https://issues.apache.org/jira/browse/COUCHDB-3288 > Project: CouchDB > Issue Type: Improvement > Reporter: Paul Joseph Davis > > To enable a mixed cluster upgrade (i.e., rolling reboot upgrade) we need to > do some preparatory work to remove access to the #db{} record, since this > record is shared between nodes. > This work is all straightforward and just involves changing things like > Db#db.main_pid to couch_db:get_main_pid(Db) or similar.
[jira] [Commented] (COUCHDB-2964) Investigate switching replicator manager change feeds to using "normal" instead of "longpoll"
[ https://issues.apache.org/jira/browse/COUCHDB-2964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955835#comment-15955835 ] ASF subversion and git services commented on COUCHDB-2964: -- Commit d00b981445c03622497088eb872059ab4f48b298 in couchdb-couch-replicator's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-replicator.git;h=d00b981 ] Prevent replicator manager change feeds from getting stuck Switch them from `longpoll` to `normal`. This prevents them from getting stuck, which could happen if more than one `resume_scan` message arrives for the same shard. The first time, a longpoll change feed would finish and the end sequence would be checkpointed. But if another resume_scan arrives and the database hasn't changed, the longpoll change feed would hang until the db is updated. The reason there would be multiple `resume_scan` messages is a race condition between the db update handler and the scanner component. They are both started asynchronously at roughly the same time. The scanner finds a new shard while the db handler notices changes for that shard. If shards are modified quickly after being discovered by the scanner, both components would issue a resume_scan. The effect is more pronounced when there are a large number of _replicator shards and constant db creation/deletion/updates. COUCHDB-2964 > Investigate switching replicator manager change feeds to using "normal" > instead of "longpoll" > - > > Key: COUCHDB-2964 > URL: https://issues.apache.org/jira/browse/COUCHDB-2964 > Project: CouchDB > Issue Type: Improvement > Components: Replication > Reporter: Nick Vatamaniuc > Assignee: kzx > Fix For: 2.1 > >
[jira] [Commented] (COUCHDB-2992) Add additional support for document size
[ https://issues.apache.org/jira/browse/COUCHDB-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955834#comment-15955834 ] ASF subversion and git services commented on COUCHDB-2992: -- Commit 64958096d4f9a940c01cbc472da5265f349c9545 in couchdb-couch-replicator's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-replicator.git;h=6495809 ] Fix unit test after renaming max_document_size config parameter `couchdb.max_document_size` was renamed to `httpd.max_http_request_size`. The unit test was testing how the replicator behaves when faced with a reduced request size configuration on the target. COUCHDB-2992 > Add additional support for document size > > > Key: COUCHDB-2992 > URL: https://issues.apache.org/jira/browse/COUCHDB-2992 > Project: CouchDB > Issue Type: Improvement > Components: Database Core > Reporter: Tony Sun > > Currently, the only restriction on users creating documents is a > max_document_size of 64 GB. Large documents often lead to issues with our > indexers. This feature will allow users finer-grained control over document > size.
[jira] [Commented] (COUCHDB-2992) Add additional support for document size
[ https://issues.apache.org/jira/browse/COUCHDB-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955833#comment-15955833 ] ASF subversion and git services commented on COUCHDB-2992: -- Commit 30915e3309fb30c2164e668d33dbd393e77925c0 in couchdb-couch-replicator's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-replicator.git;h=30915e3 ] Remove unused mp_parse_doc function from replicator It was left behind accidentally when merging Cloudant's dbcore work. COUCHDB-2992 > Add additional support for document size > > > Key: COUCHDB-2992 > URL: https://issues.apache.org/jira/browse/COUCHDB-2992 > Project: CouchDB > Issue Type: Improvement > Components: Database Core > Reporter: Tony Sun > > Currently, the only restriction on users creating documents is a > max_document_size of 64 GB. Large documents often lead to issues with our > indexers. This feature will allow users finer-grained control over document > size.
[jira] [Commented] (COUCHDB-3287) Implement pluggable storage engines
[ https://issues.apache.org/jira/browse/COUCHDB-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955837#comment-15955837 ] ASF subversion and git services commented on COUCHDB-3287: -- Commit 4e45eab609aede8f17ff44b89044c34b0f4ab6a1 in couchdb-couch-replicator's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-replicator.git;h=4e45eab ] Update tests to use pluggable storage engine API COUCHDB-3287 > Implement pluggable storage engines > --- > > Key: COUCHDB-3287 > URL: https://issues.apache.org/jira/browse/COUCHDB-3287 > Project: CouchDB > Issue Type: Improvement >Reporter: Paul Joseph Davis > > Opening branches for the pluggable storage engine work described here: > http://mail-archives.apache.org/mod_mbox/couchdb-dev/201606.mbox/%3CCAJ_m3YDjA9xym_JRVtd6Xi7LX7Ajwc6EmH_wyCRD1jgTzk8mKA%40mail.gmail.com%3E -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955836#comment-15955836 ] ASF subversion and git services commented on COUCHDB-3288: -- Commit a8ac02d3a423ca5798018edb6bf3690b742cf94c in couchdb-couch-replicator's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-replicator.git;h=a8ac02d ] Remove public db record COUCHDB-3288 > Remove public db record > --- > > Key: COUCHDB-3288 > URL: https://issues.apache.org/jira/browse/COUCHDB-3288 > Project: CouchDB > Issue Type: Improvement > Reporter: Paul Joseph Davis > > To enable a mixed cluster upgrade (i.e., rolling reboot upgrade) we need to > do some preparatory work to remove access to the #db{} record, since this > record is shared between nodes. > This work is all straightforward and just involves changing things like > Db#db.main_pid to couch_db:get_main_pid(Db) or similar.
[jira] [Commented] (COUCHDB-2964) Investigate switching replicator manager change feeds to using "normal" instead of "longpoll"
[ https://issues.apache.org/jira/browse/COUCHDB-2964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955831#comment-15955831 ] ASF subversion and git services commented on COUCHDB-2964: -- Commit d00b981445c03622497088eb872059ab4f48b298 in couchdb-couch-replicator's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-replicator.git;h=d00b981 ] Prevent replicator manager change feeds from getting stuck Switch them from `longpoll` to `normal`. This prevents them from getting stuck, which could happen if more than one `resume_scan` message arrives for the same shard. The first time, a longpoll change feed would finish and the end sequence would be checkpointed. But if another resume_scan arrives and the database hasn't changed, the longpoll change feed would hang until the db is updated. The reason there would be multiple `resume_scan` messages is a race condition between the db update handler and the scanner component. They are both started asynchronously at roughly the same time. The scanner finds a new shard while the db handler notices changes for that shard. If shards are modified quickly after being discovered by the scanner, both components would issue a resume_scan. The effect is more pronounced when there are a large number of _replicator shards and constant db creation/deletion/updates. COUCHDB-2964 > Investigate switching replicator manager change feeds to using "normal" > instead of "longpoll" > - > > Key: COUCHDB-2964 > URL: https://issues.apache.org/jira/browse/COUCHDB-2964 > Project: CouchDB > Issue Type: Improvement > Components: Replication > Reporter: Nick Vatamaniuc > Assignee: kzx > Fix For: 2.1 > >
[jira] [Commented] (COUCHDB-2992) Add additional support for document size
[ https://issues.apache.org/jira/browse/COUCHDB-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955830#comment-15955830 ] ASF subversion and git services commented on COUCHDB-2992: -- Commit 64958096d4f9a940c01cbc472da5265f349c9545 in couchdb-couch-replicator's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-replicator.git;h=6495809 ] Fix unit test after renaming max_document_size config parameter `couchdb.max_document_size` was renamed to `httpd.max_http_request_size`. The unit test was testing how the replicator behaves when faced with a reduced request size configuration on the target. COUCHDB-2992 > Add additional support for document size > > > Key: COUCHDB-2992 > URL: https://issues.apache.org/jira/browse/COUCHDB-2992 > Project: CouchDB > Issue Type: Improvement > Components: Database Core > Reporter: Tony Sun > > Currently, the only restriction on users creating documents is a > max_document_size of 64 GB. Large documents often lead to issues with our > indexers. This feature will allow users finer-grained control over document > size.
[jira] [Commented] (COUCHDB-3316) Log replicator db name not just doc ids
[ https://issues.apache.org/jira/browse/COUCHDB-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955828#comment-15955828 ] ASF subversion and git services commented on COUCHDB-3316: -- Commit 50dcd7d7c5f7ce003e8e2fc84646c1aa9931ebaa in couchdb-couch-replicator's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-replicator.git;h=50dcd7d ] Make sure to log db as well as doc in replicator logs. COUCHDB-3316 > Log replicator db name not just doc ids > --- > > Key: COUCHDB-3316 > URL: https://issues.apache.org/jira/browse/COUCHDB-3316 > Project: CouchDB > Issue Type: Improvement > Reporter: Nick Vatamaniuc > > Currently the replicator logs only the doc_id. However, in 2.0 there can be > more than one _replicator db, so for the logs to be useful it would be nice > to log the db as well.
[jira] [Commented] (COUCHDB-2992) Add additional support for document size
[ https://issues.apache.org/jira/browse/COUCHDB-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955829#comment-15955829 ] ASF subversion and git services commented on COUCHDB-2992: -- Commit 30915e3309fb30c2164e668d33dbd393e77925c0 in couchdb-couch-replicator's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-replicator.git;h=30915e3 ] Remove unused mp_parse_doc function from replicator It was left behind accidentally when merging Cloudant's dbcore work. COUCHDB-2992 > Add additional support for document size > > > Key: COUCHDB-2992 > URL: https://issues.apache.org/jira/browse/COUCHDB-2992 > Project: CouchDB > Issue Type: Improvement > Components: Database Core > Reporter: Tony Sun > > Currently, the only restriction on users creating documents is a > max_document_size of 64 GB. Large documents often lead to issues with our > indexers. This feature will allow users finer-grained control over document > size.
[jira] [Commented] (COUCHDB-3287) Implement pluggable storage engines
[ https://issues.apache.org/jira/browse/COUCHDB-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955825#comment-15955825 ] ASF subversion and git services commented on COUCHDB-3287: -- Commit e7932f7287548c7f084e814954761ee5a9188bab in couchdb-couch-mrview's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-mrview.git;h=e7932f7 ] Update to use pluggable storage API COUCHDB-3287 > Implement pluggable storage engines > --- > > Key: COUCHDB-3287 > URL: https://issues.apache.org/jira/browse/COUCHDB-3287 > Project: CouchDB > Issue Type: Improvement >Reporter: Paul Joseph Davis > > Opening branches for the pluggable storage engine work described here: > http://mail-archives.apache.org/mod_mbox/couchdb-dev/201606.mbox/%3CCAJ_m3YDjA9xym_JRVtd6Xi7LX7Ajwc6EmH_wyCRD1jgTzk8mKA%40mail.gmail.com%3E -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955824#comment-15955824 ] ASF subversion and git services commented on COUCHDB-3288: -- Commit 66274de217f64b167c1bdfe0c6a6fc211065fb12 in couchdb-couch-mrview's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-mrview.git;h=66274de ] Remove public db record COUCHDB-3288 > Remove public db record > --- > > Key: COUCHDB-3288 > URL: https://issues.apache.org/jira/browse/COUCHDB-3288 > Project: CouchDB > Issue Type: Improvement > Reporter: Paul Joseph Davis > > To enable a mixed cluster upgrade (i.e., rolling reboot upgrade) we need to > do some preparatory work to remove access to the #db{} record, since this > record is shared between nodes. > This work is all straightforward and just involves changing things like > Db#db.main_pid to couch_db:get_main_pid(Db) or similar.
[jira] [Commented] (COUCHDB-2992) Add additional support for document size
[ https://issues.apache.org/jira/browse/COUCHDB-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955823#comment-15955823 ] ASF subversion and git services commented on COUCHDB-2992: -- Commit 398c30e8785c3cd880d7d9788d25810dfe626c18 in couchdb-couch-mrview's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-mrview.git;h=398c30e ] Allow limiting maximum document body size This is a companion commit to this one: https://github.com/apache/couchdb-couch/pull/235 COUCHDB-2992 > Add additional support for document size > > > Key: COUCHDB-2992 > URL: https://issues.apache.org/jira/browse/COUCHDB-2992 > Project: CouchDB > Issue Type: Improvement > Components: Database Core > Reporter: Tony Sun > > Currently, the only restriction on users creating documents is a > max_document_size of 64 GB. Large documents often lead to issues with our > indexers. This feature will allow users finer-grained control over document > size.
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955809#comment-15955809 ] ASF subversion and git services commented on COUCHDB-3288: -- Commit 66274de217f64b167c1bdfe0c6a6fc211065fb12 in couchdb-couch-mrview's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-mrview.git;h=66274de ] Remove public db record COUCHDB-3288 > Remove public db record > --- > > Key: COUCHDB-3288 > URL: https://issues.apache.org/jira/browse/COUCHDB-3288 > Project: CouchDB > Issue Type: Improvement > Reporter: Paul Joseph Davis > > To enable a mixed cluster upgrade (i.e., rolling reboot upgrade) we need to > do some preparatory work to remove access to the #db{} record, since this > record is shared between nodes. > This work is all straightforward and just involves changing things like > Db#db.main_pid to couch_db:get_main_pid(Db) or similar.
[jira] [Commented] (COUCHDB-2992) Add additional support for document size
[ https://issues.apache.org/jira/browse/COUCHDB-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955808#comment-15955808 ] ASF subversion and git services commented on COUCHDB-2992: -- Commit 398c30e8785c3cd880d7d9788d25810dfe626c18 in couchdb-couch-mrview's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-mrview.git;h=398c30e ] Allow limiting maximum document body size This is a companion commit to this one: https://github.com/apache/couchdb-couch/pull/235 COUCHDB-2992 > Add additional support for document size > > > Key: COUCHDB-2992 > URL: https://issues.apache.org/jira/browse/COUCHDB-2992 > Project: CouchDB > Issue Type: Improvement > Components: Database Core > Reporter: Tony Sun > > Currently, the only restriction on users creating documents is a > max_document_size of 64 GB. Large documents often lead to issues with our > indexers. This feature will allow users finer-grained control over document > size.
[jira] [Commented] (COUCHDB-3287) Implement pluggable storage engines
[ https://issues.apache.org/jira/browse/COUCHDB-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955806#comment-15955806 ] ASF subversion and git services commented on COUCHDB-3287: -- Commit e70ca89921f73522770f769b2a1e606fcae51eb0 in couchdb-chttpd's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-chttpd.git;h=e70ca89 ] Support engine selection from the HTTP API COUCHDB-3287 > Implement pluggable storage engines > --- > > Key: COUCHDB-3287 > URL: https://issues.apache.org/jira/browse/COUCHDB-3287 > Project: CouchDB > Issue Type: Improvement >Reporter: Paul Joseph Davis > > Opening branches for the pluggable storage engine work described here: > http://mail-archives.apache.org/mod_mbox/couchdb-dev/201606.mbox/%3CCAJ_m3YDjA9xym_JRVtd6Xi7LX7Ajwc6EmH_wyCRD1jgTzk8mKA%40mail.gmail.com%3E -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (COUCHDB-2992) Add additional support for document size
[ https://issues.apache.org/jira/browse/COUCHDB-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955803#comment-15955803 ] ASF subversion and git services commented on COUCHDB-2992: -- Commit a1470e3bdbcb4b98d9cc7f5dc3641a2b008df16b in couchdb-chttpd's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-chttpd.git;h=a1470e3 ] Rename max_document_size to max_http_request_size `max_document_size` is implemented as `max_http_request_size`. There was no real check for document size. In some cases the implementation was a close enough proxy (PUT-ing and GET-ing single docs), but in some edge cases, like _bulk_docs requests, the discrepancy between request size and document size could be rather large. The section was changed accordingly from `couchdb` to `httpd`. `httpd` was chosen as it applies to both the clustered and the local interface. There is a parallel effort to implement an actual max_document_size check. The set of commits should be merged close enough together to allow for a backwards-compatible transition. COUCHDB-2992 > Add additional support for document size > > > Key: COUCHDB-2992 > URL: https://issues.apache.org/jira/browse/COUCHDB-2992 > Project: CouchDB > Issue Type: Improvement > Components: Database Core > Reporter: Tony Sun > > Currently, the only restriction on users creating documents is a > max_document_size of 64 GB. Large documents often lead to issues with our > indexers. This feature will allow users finer-grained control over document > size.
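The rename described in the commit above amounts to moving the setting between ini sections. A minimal before/after sketch of the configuration (the 4 GiB value here is purely illustrative, not a shipped default):

```ini
; before the rename: the limit lived in the couchdb section,
; even though it really measured HTTP request size
[couchdb]
max_document_size = 4294967296

; after the rename: the same limit, named for what it actually checks,
; in the httpd section (applies to both clustered and local interfaces)
[httpd]
max_http_request_size = 4294967296
```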
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955802#comment-15955802 ] ASF subversion and git services commented on COUCHDB-3288: -- Commit 04d26cc72cf2b3334e1796e48955e8cd79488484 in couchdb-chttpd's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-chttpd.git;h=04d26cc ] Remove public db record COUCHDB-3288 > Remove public db record > --- > > Key: COUCHDB-3288 > URL: https://issues.apache.org/jira/browse/COUCHDB-3288 > Project: CouchDB > Issue Type: Improvement > Reporter: Paul Joseph Davis > > To enable a mixed cluster upgrade (i.e., rolling reboot upgrade) we need to > do some preparatory work to remove access to the #db{} record, since this > record is shared between nodes. > This work is all straightforward and just involves changing things like > Db#db.main_pid to couch_db:get_main_pid(Db) or similar.
[jira] [Commented] (COUCHDB-2992) Add additional support for document size
[ https://issues.apache.org/jira/browse/COUCHDB-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955800#comment-15955800 ] ASF subversion and git services commented on COUCHDB-2992: -- Commit a1470e3bdbcb4b98d9cc7f5dc3641a2b008df16b in couchdb-chttpd's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-chttpd.git;h=a1470e3 ] Rename max_document_size to max_http_request_size `max_document_size` is implemented as `max_http_request_size`. There was no real check for document size. In some cases the implementation was a close enough proxy (PUT-ing and GET-ing single docs), but in some edge cases, like _bulk_docs requests, the discrepancy between request size and document size could be rather large. The section was changed accordingly from `couchdb` to `httpd`. `httpd` was chosen as it applies to both the clustered and the local interface. There is a parallel effort to implement an actual max_document_size check. The set of commits should be merged close enough together to allow for a backwards-compatible transition. COUCHDB-2992 > Add additional support for document size > > > Key: COUCHDB-2992 > URL: https://issues.apache.org/jira/browse/COUCHDB-2992 > Project: CouchDB > Issue Type: Improvement > Components: Database Core > Reporter: Tony Sun > > Currently, the only restriction on users creating documents is a > max_document_size of 64 GB. Large documents often lead to issues with our > indexers. This feature will allow users finer-grained control over document > size.
[jira] [Commented] (COUCHDB-2992) Add additional support for document size
[ https://issues.apache.org/jira/browse/COUCHDB-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955804#comment-15955804 ] ASF subversion and git services commented on COUCHDB-2992: -- Commit d1848e6f2288ea9b3758c22f10f75706a87be3b5 in couchdb-chttpd's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-chttpd.git;h=d1848e6 ] Allow limiting maximum document body size This is the HTTP layer and some tests. The actual checking is done in the couch application's from_json_obj/1 function. If a document is too large, a 413 response code is returned. The error reason will be the document ID. The intent is to help users identify the document if they used the _bulk_docs endpoint. It will also help the replicator skip over documents which are too large. COUCHDB-2992 > Add additional support for document size > > > Key: COUCHDB-2992 > URL: https://issues.apache.org/jira/browse/COUCHDB-2992 > Project: CouchDB > Issue Type: Improvement > Components: Database Core > Reporter: Tony Sun > > Currently, the only restriction on users creating documents is a > max_document_size of 64 GB. Large documents often lead to issues with our > indexers. This feature will allow users finer-grained control over document > size.
[jira] [Commented] (COUCHDB-3288) Remove public db record
[ https://issues.apache.org/jira/browse/COUCHDB-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955805#comment-15955805 ] ASF subversion and git services commented on COUCHDB-3288: -- Commit 04d26cc72cf2b3334e1796e48955e8cd79488484 in couchdb-chttpd's branch refs/heads/COUCHDB-3287-pluggable-storage-engines from [~paul.joseph.davis] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-chttpd.git;h=04d26cc ] Remove public db record COUCHDB-3288 > Remove public db record > --- > > Key: COUCHDB-3288 > URL: https://issues.apache.org/jira/browse/COUCHDB-3288 > Project: CouchDB > Issue Type: Improvement > Reporter: Paul Joseph Davis > > To enable a mixed cluster upgrade (i.e., rolling reboot upgrade) we need to > do some preparatory work to remove access to the #db{} record, since this > record is shared between nodes. > This work is all straightforward and just involves changing things like > Db#db.main_pid to couch_db:get_main_pid(Db) or similar.
[jira] [Commented] (COUCHDB-2992) Add additional support for document size
[ https://issues.apache.org/jira/browse/COUCHDB-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955801#comment-15955801 ] ASF subversion and git services commented on COUCHDB-2992: -- Commit d1848e6f2288ea9b3758c22f10f75706a87be3b5 in couchdb-chttpd's branch refs/heads/COUCHDB-3288-remove-public-db-record from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb-chttpd.git;h=d1848e6 ] Allow limiting maximum document body size This is the HTTP layer and some tests. The actual checking is done in the couch application's from_json_obj/1 function. If a document is too large, a 413 response code is returned. The error reason will be the document ID. The intent is to help users identify the document if they used the _bulk_docs endpoint. It will also help the replicator skip over documents which are too large. COUCHDB-2992 > Add additional support for document size > > > Key: COUCHDB-2992 > URL: https://issues.apache.org/jira/browse/COUCHDB-2992 > Project: CouchDB > Issue Type: Improvement > Components: Database Core > Reporter: Tony Sun > > Currently, the only restriction on users creating documents is a > max_document_size of 64 GB. Large documents often lead to issues with our > indexers. This feature will allow users finer-grained control over document > size.
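Since a too-large document in a _bulk_docs request is rejected with a 413 whose error reason names the document ID, a client can pre-screen a batch before uploading. A rough client-side sketch (the helper name and the size accounting are assumptions; the server is authoritative and may compute document size differently):

```python
import json

def oversized_doc_ids(docs, max_size):
    """Return the _id of each doc whose compact JSON encoding exceeds
    max_size bytes.

    Client-side pre-check only; CouchDB enforces the real limit and may
    measure size differently than len(json.dumps(...)).
    """
    too_big = []
    for doc in docs:
        body = json.dumps(doc, separators=(",", ":")).encode("utf-8")
        if len(body) > max_size:
            too_big.append(doc.get("_id"))
    return too_big

docs = [
    {"_id": "small", "value": "x"},
    {"_id": "large", "value": "x" * 1000},
]
print(oversized_doc_ids(docs, 100))
```

Documents flagged this way could be dropped from the batch (mirroring how the replicator skips over too-large documents) instead of failing the whole _bulk_docs request.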
[jira] [Commented] (COUCHDB-3324) Scheduling Replicator
[ https://issues.apache.org/jira/browse/COUCHDB-3324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955636#comment-15955636 ] ASF subversion and git services commented on COUCHDB-3324: -- Commit 382d4880514bc63e1c07cfa810a76485917c0bec in couchdb's branch refs/heads/63012-scheduler from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb.git;h=382d488 ] Stitch scheduling replicator together. Glue together all the scheduling replicator pieces. The scheduler is the main component. It can run a large number of replication jobs by switching between them, stopping and starting some periodically. Jobs which fail are backed off exponentially. Normal (non-continuous) jobs will be allowed to run to completion to preserve their current semantics. Scheduler behavior can be configured by these options in the `[replicator]` section: * `max_jobs` : Number of actively running replications. Making this too high could cause performance issues. Making it too low could mean replication jobs might not have enough time to make progress before getting unscheduled again. This parameter can be adjusted at runtime and will take effect during the next rescheduling cycle. * `interval` : Scheduling interval in milliseconds. During each rescheduling cycle the scheduler might start or stop up to "max_churn" number of jobs. * `max_churn` : Maximum number of replications to start and stop during rescheduling. This parameter along with "interval" defines the rate of job replacement. During startup, however, a much larger number of jobs could be started (up to max_jobs) in a short period of time. Replication jobs are added to the scheduler by the document processor or from the `couch_replicator:replicate/2` function when called from the `_replicate` HTTP endpoint handler. The document processor listens for updates via the couch_multidb_changes module and then tries to add replication jobs to the scheduler.
Sometimes translating a document update into a replication job can fail, either permanently (for example, if the document is malformed and missing expected fields) or temporarily, if it is a filtered replication and the filter cannot be fetched. A failed filter fetch is retried with exponential backoff.

couch_replicator_clustering is in charge of monitoring cluster membership changes. When membership changes, after a configurable quiet period, a rescan is initiated. The rescan shuffles replication jobs to make sure each replication job is running on only one node.

A new set of stats was added to introspect scheduler and doc processor internals.

The top replication supervisor structure is `rest_for_one`. This means that if a child crashes, all children to the "right" of it are restarted (if the supervisor hierarchy is visualized as an upside-down tree). Clustering, connection pool and rate limiter are towards the "left" as they are more fundamental; if the clustering child crashes, most other components will be restarted. The doc processor and multi-db changes children are towards the "right"; if they crash, they can be safely restarted without affecting already running replications or components like clustering or the connection pool.

Jira: COUCHDB-3324

> Scheduling Replicator
> -
>
> Key: COUCHDB-3324
> URL: https://issues.apache.org/jira/browse/COUCHDB-3324
> Project: CouchDB
> Issue Type: New Feature
> Reporter: Nick Vatamaniuc
>
> Merge scheduling replicator
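In ini form, the three scheduler options described above live in a local config file along these lines (the values shown are illustrative, not recommended settings):

```ini
[replicator]
; Maximum number of replication jobs running at any one time.
max_jobs = 500
; Rescheduling interval in milliseconds.
interval = 60000
; Maximum number of jobs started/stopped per rescheduling cycle.
max_churn = 20
```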
[jira] [Commented] (COUCHDB-3324) Scheduling Replicator
[ https://issues.apache.org/jira/browse/COUCHDB-3324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955441#comment-15955441 ] ASF subversion and git services commented on COUCHDB-3324: --

Commit 4f9168422919ea6c3fafdb41212efd2e9da7e280 in couchdb's branch refs/heads/63012-scheduler from [~sagelywizard] [ https://git-wip-us.apache.org/repos/asf?p=couchdb.git;h=4f91684 ]

Add `_scheduler/{jobs,docs}` API endpoints

The `_scheduler/docs` endpoint provides a view of all replicator docs which have been seen by the scheduler. This endpoint includes useful information such as the state of the replication and the coordinator node.

The `_scheduler/jobs` endpoint provides a view of all replications managed by the scheduler. It includes more information on each replication than `_scheduler/docs`, including the history of the replication's state transitions.

Jira: COUCHDB-3324

> Scheduling Replicator
> -
>
> Key: COUCHDB-3324
> URL: https://issues.apache.org/jira/browse/COUCHDB-3324
> Project: CouchDB
> Issue Type: New Feature
> Reporter: Nick Vatamaniuc
>
> Merge scheduling replicator
[GitHub] davisp commented on a change in pull request #469: Choose index based on fields match
davisp commented on a change in pull request #469: Choose index based on fields match URL: https://github.com/apache/couchdb/pull/469#discussion_r109720398

## File path: src/mango/src/mango_cursor_view.erl ##

@@ -107,10 +107,14 @@ execute(#cursor{db = Db, index = Idx} = Cursor0, UserFun, UserAcc) ->
 % check FieldRanges for a, b, c, and d and return
 % the longest prefix of columns found.
 composite_indexes(Indexes, FieldRanges) ->
-    lists:foldl(fun(Idx, Acc) ->
+    FieldKeys = [Key || {Key, _} <- FieldRanges],
+    SortedIndexes = lists:foldl(fun(Idx, Acc) ->
         Cols = mango_idx:columns(Idx),
         Prefix = composite_prefix(Cols, FieldRanges),
-        [{Idx, Prefix} | Acc]
+        % create a score based on how close the number of fields
+        % the index has to the number of fields in the selector
+        Score = length(Cols) - length(FieldKeys),

Review comment: To expand on this, there are three things we're contemplating here:

1. The number of {Field, Range} pairs in the selector.
2. The number of columns in the index.
3. The shared prefix between those two.

The important case to remember here is that the selector may have more fields than are being satisfied by the index; i.e., a selector may only use the first one or two columns of a three-column index. That means a selector with more fields than the index has columns ends up with a negative score, which seems odd. However, what we really care about here is being the least specific once we have some part of an index satisfied. It's a bit odd given that we also want the longest prefix. So the sort, in words, is basically: I want to use the index with the most fields matching my selector, but with the fewest extra fields not used by my selector (i.e., the smallest length(Cols) - length(Prefix)). Though that leaves us with the same current conundrum when we have two indexes with the same number of columns that share a common prefix with the selector (the [a, b, c], [a, d, e] example I mentioned).
For that case I'd still change it away from the docid sort, since that's rather ambiguous and generally not supplied by the user, and as such is essentially random (i.e., it's a hash, which is hard to predict; it's not uuid-random though, if memory serves). Anyway, I'd use the length preference you've got, then break ties with the field name past the end of the sort, and then I guess fall back to docid if we have two indices that have the same set of columns, because at that point it shouldn't matter? This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
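The sort davisp sketches here (longest matching prefix first, then the fewest unused columns, then a stable fallback) can be modeled in a short Python sketch. This is an illustrative model of the ordering discussed in the thread, not Mango's actual Erlang code; the function names and data shapes are hypothetical, and the prefix check is a simplification of composite_prefix:

```python
def choose_best_index(candidates, selector_fields):
    """Pick an index: longest prefix match wins, then the fewest
    extra (unused) columns, then ddoc id as a deterministic fallback.

    Each candidate is a (ddoc_id, columns) pair; selector_fields is
    the list of field names used by the selector. All names here are
    illustrative, not Mango's real API.
    """
    def prefix_len(columns):
        # Length of the leading run of index columns that the
        # selector actually constrains (simplified prefix match).
        n = 0
        for col in columns:
            if col in selector_fields:
                n += 1
            else:
                break
        return n

    def sort_key(cand):
        ddoc_id, columns = cand
        p = prefix_len(columns)
        # Longer prefix first (negated), then fewer unused columns,
        # then ddoc id so ties are at least deterministic.
        return (-p, len(columns) - p, ddoc_id)

    return min(candidates, key=sort_key)

# Garren's example: selector {name: "Mary"} with two candidate indexes.
indexes = [("_design/a", ["name", "role"]),
           ("_design/z", ["name"])]
best = choose_best_index(indexes, ["name"])  # picks the ["name"] index
```

With the scoring in place, the narrower `["name"]` index wins for `{name: "Mary"}` regardless of ddoc ordering, which is the behavior the PR is after.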
[jira] [Commented] (COUCHDB-3324) Scheduling Replicator
[ https://issues.apache.org/jira/browse/COUCHDB-3324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955422#comment-15955422 ] ASF subversion and git services commented on COUCHDB-3324: --

Commit aecdad9de619cfffa03ab591c0e3c939c7b2b6e5 in couchdb's branch refs/heads/63012-scheduler from [~vatamane] [ https://git-wip-us.apache.org/repos/asf?p=couchdb.git;h=aecdad9 ]

Stitch scheduling replicator together.

Glue together all the scheduling replicator pieces.

The scheduler is the main component. It can run a large number of replication jobs by switching between them, stopping and starting some periodically. Jobs which fail are backed off exponentially. Normal (non-continuous) jobs are allowed to run to completion to preserve their current semantics.

Scheduler behavior can be configured by these options in the `[replicator]` section:

* `max_jobs`: Number of actively running replications. Setting this too high could cause performance issues; setting it too low could mean replication jobs might not have enough time to make progress before being unscheduled again. This parameter can be adjusted at runtime and takes effect during the next rescheduling cycle.

* `interval`: Scheduling interval in milliseconds. During each rescheduling cycle the scheduler might start or stop up to `max_churn` jobs.

* `max_churn`: Maximum number of replications to start and stop during rescheduling. Together with `interval`, this defines the rate of job replacement. During startup, however, a much larger number of jobs (up to `max_jobs`) could be started in a short period of time.

Replication jobs are added to the scheduler by the document processor, or from the `couch_replicator:replicate/2` function when called from the `_replicate` HTTP endpoint handler. The document processor listens for updates via the couch_multidb_changes module, then tries to add replication jobs to the scheduler.
Sometimes translating a document update into a replication job can fail, either permanently (for example, if the document is malformed and missing expected fields) or temporarily, if it is a filtered replication and the filter cannot be fetched. A failed filter fetch is retried with exponential backoff.

couch_replicator_clustering is in charge of monitoring cluster membership changes. When membership changes, after a configurable quiet period, a rescan is initiated. The rescan shuffles replication jobs to make sure each replication job is running on only one node.

A new set of stats was added to introspect scheduler and doc processor internals.

The top replication supervisor structure is `rest_for_one`. This means that if a child crashes, all children to the "right" of it are restarted (if the supervisor hierarchy is visualized as an upside-down tree). Clustering, connection pool and rate limiter are towards the "left" as they are more fundamental; if the clustering child crashes, most other components will be restarted. The doc processor and multi-db changes children are towards the "right"; if they crash, they can be safely restarted without affecting already running replications or components like clustering or the connection pool.

Jira: COUCHDB-3324

> Scheduling Replicator
> -
>
> Key: COUCHDB-3324
> URL: https://issues.apache.org/jira/browse/COUCHDB-3324
> Project: CouchDB
> Issue Type: New Feature
> Reporter: Nick Vatamaniuc
>
> Merge scheduling replicator
[GitHub] davisp commented on a change in pull request #469: Choose index based on fields match
davisp commented on a change in pull request #469: Choose index based on fields match URL: https://github.com/apache/couchdb/pull/469#discussion_r109708390

## File path: src/mango/src/mango_cursor_view.erl ##

@@ -270,4 +276,4 @@ is_design_doc(RowProps) ->
     case couch_util:get_value(id, RowProps) of
         <<"_design/", _/binary>> -> true;
         _ -> false
-    end.

Review comment: Another unrelated change we should try to avoid.
[GitHub] davisp commented on a change in pull request #469: Choose index based on fields match
davisp commented on a change in pull request #469: Choose index based on fields match URL: https://github.com/apache/couchdb/pull/469#discussion_r109708337

## File path: src/mango/src/mango_cursor_view.erl ##

@@ -135,14 +139,16 @@ composite_prefix([Col | Rest], Ranges) ->
 % reduce view read on each index with the ranges to find
 % the one that has the fewest number of rows or something.
 choose_best_index(_DbName, IndexRanges) ->
-    Cmp = fun({A1, A2}, {B1, B2}) ->
-        case length(A2) - length(B2) of
+    Cmp = fun({IdxA, PrefixA, ScoreA}, {IdxB, PrefixB, ScoreB}) ->
+        case length(PrefixA) - length(PrefixB) of
             N when N < 0 -> true;
             N when N == 0 ->
                 % This is a really bad sort and will end
                 % up preferring indices based on the
                 % (dbname, ddocid, view_name) triple
-                A1 =< B1;
+                %IdxA =< IdxB;
+                %prefer using the index with the lower score
+                ScoreA =< ScoreB;

Review comment: Score is a bit opaque. I'd just call it length and say that we prefer using the index with the fewest fields. Also, I wouldn't bother calculating it in composite_indexes just to use it here and then have to ignore the return value. You've got all the info here, so I'd just calculate it on the fly.
[GitHub] davisp commented on a change in pull request #469: Choose index based on fields match
davisp commented on a change in pull request #469: Choose index based on fields match URL: https://github.com/apache/couchdb/pull/469#discussion_r109704334

## File path: src/mango/rebar.config.script ##

@@ -12,16 +12,16 @@
 DreyfusAppFile = filename:join(filename:dirname(SCRIPT),
     "../dreyfus/src/dreyfus.app.src"),
-RenameFile = filename:join(filename:dirname(SCRIPT),
-    "src/mango_cursor_text.erl"),

Review comment: Did you forget a rebase? This looks like it's reverting an unrelated change.
[GitHub] davisp commented on a change in pull request #469: Choose index based on fields match
davisp commented on a change in pull request #469: Choose index based on fields match URL: https://github.com/apache/couchdb/pull/469#discussion_r109705681

## File path: src/mango/src/mango_cursor_view.erl ##

@@ -135,14 +139,16 @@ composite_prefix([Col | Rest], Ranges) ->
 % reduce view read on each index with the ranges to find
 % the one that has the fewest number of rows or something.
 choose_best_index(_DbName, IndexRanges) ->
-    Cmp = fun({A1, A2}, {B1, B2}) ->
-        case length(A2) - length(B2) of
+    Cmp = fun({IdxA, PrefixA, ScoreA}, {IdxB, PrefixB, ScoreB}) ->
+        case length(PrefixA) - length(PrefixB) of
             N when N < 0 -> true;
             N when N == 0 ->
                 % This is a really bad sort and will end
                 % up preferring indices based on the
                 % (dbname, ddocid, view_name) triple

Review comment: You'll want to remove this comment since it's no longer relevant.
[GitHub] davisp commented on a change in pull request #469: Choose index based on fields match
davisp commented on a change in pull request #469: Choose index based on fields match URL: https://github.com/apache/couchdb/pull/469#discussion_r109705195

## File path: src/mango/rebar.config.script ##

@@ -12,16 +12,16 @@
 DreyfusAppFile = filename:join(filename:dirname(SCRIPT),
     "../dreyfus/src/dreyfus.app.src"),
-RenameFile = filename:join(filename:dirname(SCRIPT),
-    "src/mango_cursor_text.erl"),

Review comment: And there's a weird breakage down below, with a file rename that doesn't look right to me.
[GitHub] davisp commented on a change in pull request #469: Choose index based on fields match
davisp commented on a change in pull request #469: Choose index based on fields match URL: https://github.com/apache/couchdb/pull/469#discussion_r109710384

## File path: src/mango/src/mango_cursor_view.erl ##

@@ -107,10 +107,14 @@ execute(#cursor{db = Db, index = Idx} = Cursor0, UserFun, UserAcc) ->
 % check FieldRanges for a, b, c, and d and return
 % the longest prefix of columns found.
 composite_indexes(Indexes, FieldRanges) ->
-    lists:foldl(fun(Idx, Acc) ->
+    FieldKeys = [Key || {Key, _} <- FieldRanges],
+    SortedIndexes = lists:foldl(fun(Idx, Acc) ->
         Cols = mango_idx:columns(Idx),
         Prefix = composite_prefix(Cols, FieldRanges),
-        [{Idx, Prefix} | Acc]
+        % create a score based on how close the number of fields
+        % the index has to the number of fields in the selector
+        Score = length(Cols) - length(FieldKeys),

Review comment: Seems like we should be looking at the length of the prefix, no?
[GitHub] garrensmith commented on issue #469: Choose index based on fields match
garrensmith commented on issue #469: Choose index based on fields match URL: https://github.com/apache/couchdb/pull/469#issuecomment-291537866

I ran the pouchdb-find tests locally against this fix and all tests passed.
[jira] [Commented] (COUCHDB-3357) Improve the way the index is chosen
[ https://issues.apache.org/jira/browse/COUCHDB-3357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955237#comment-15955237 ] Garren Smith commented on COUCHDB-3357: ---

A very basic first attempt at a solution: https://github.com/apache/couchdb/pull/469

> Improve the way the index is chosen
> ---
>
> Key: COUCHDB-3357
> URL: https://issues.apache.org/jira/browse/COUCHDB-3357
> Project: CouchDB
> Issue Type: Improvement
> Components: Mango
> Reporter: Garren Smith
>
> Currently, if two or more indexes can be used for a query, choose_best_index can get into a position where it chooses the index based on the sort order of its dbname and ddocid. This isn't ideal.
> If we have two docs like this:
> doc1 = { name: "Mary" };
> doc2 = { name: "Mary", role: "Ceo" };
> and we create two indexes:
> Index 1 = fields: ['name', 'role']
> Index 2 = fields: ['name']
> and a query like this:
> selector: { name: 'Mary' };
> then if Index 1 has a ddocId higher in the alphabet, e.g. A, and Index 2 has a lower ddocId, like Z, Index 1 will be selected, which means that doc1 will be excluded.
> An example of a test case can be found here:
> https://github.com/apache/couchdb/blob/7951c8ae498e372d0db19887c6e39a91885df4e1/src/mango/test/12-use-correct-index.py
[jira] [Created] (COUCHDB-3357) Improve the way the index is chosen
Garren Smith created COUCHDB-3357: - Summary: Improve the way the index is chosen Key: COUCHDB-3357 URL: https://issues.apache.org/jira/browse/COUCHDB-3357 Project: CouchDB Issue Type: Improvement Components: Mango Reporter: Garren Smith

Currently, if two or more indexes can be used for a query, choose_best_index can get into a position where it chooses the index based on the sort order of its dbname and ddocid. This isn't ideal.

If we have two docs like this:

doc1 = { name: "Mary" };
doc2 = { name: "Mary", role: "Ceo" };

and we create two indexes:

Index 1 = fields: ['name', 'role']
Index 2 = fields: ['name']

and a query like this:

selector: { name: 'Mary' };

then if Index 1 has a ddocId higher in the alphabet, e.g. A, and Index 2 has a lower ddocId, like Z, Index 1 will be selected, which means that doc1 will be excluded.

An example of a test case can be found here:
https://github.com/apache/couchdb/blob/7951c8ae498e372d0db19887c6e39a91885df4e1/src/mango/test/12-use-correct-index.py
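The first of the two indexes in this example could be created by POSTing a body along these lines to the database's Mango `/_index` endpoint; the `ddoc` and `name` values here are made up (the second index would differ only in its fields list and ddoc):

```json
{
  "index": {
    "fields": ["name", "role"]
  },
  "ddoc": "a-name-role",
  "name": "name-role-json-index",
  "type": "json"
}
```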
[GitHub] garrensmith opened a new pull request #469: Choose index based on fields match
garrensmith opened a new pull request #469: Choose index based on fields match URL: https://github.com/apache/couchdb/pull/469

## Overview

If two indexes can be used, then instead of choosing an index based on the alphabetical order of index names, choose based on a score. The score is calculated by determining which index has the fewest fields.

## Testing recommendations

There is a test that proves the issue and passes with this fix.

## JIRA issue number

## Checklist

- [ ] Code is written and works correctly;
- [ ] Changes are covered by tests;
- [ ] Documentation reflects the changes;
- [ ] I will not forget to update [rebar.config.script](https://github.com/apache/couchdb/blob/master/rebar.config.script) with the correct commit hash once this PR gets merged.
[GitHub] nickva commented on issue #120: Update docs for PUT attachments
nickva commented on issue #120: Update docs for PUT attachments URL: https://github.com/apache/couchdb-documentation/pull/120#issuecomment-291527478

Good find!
[GitHub] flimzy opened a new pull request #121: Update docs for PUT attachments
flimzy opened a new pull request #121: Update docs for PUT attachments URL: https://github.com/apache/couchdb-documentation/pull/121

These headers were completely misdocumented, the result of an apparent copy-and-paste from the GET headers documentation.
[GitHub] flimzy commented on issue #120: Update docs for PUT attachments
flimzy commented on issue #120: Update docs for PUT attachments URL: https://github.com/apache/couchdb-documentation/pull/120#issuecomment-291521922

From [RFC 7233, 3.1](https://tools.ietf.org/html/rfc7233):

> A server MUST ignore a Range header field received with a request method other than GET.

By extension, it seems logical that advertising `Accept-Ranges` for anything other than GET (and by extension, HEAD) doesn't make sense.
[GitHub] nickva commented on issue #120: Update docs for PUT attachments
nickva commented on issue #120: Update docs for PUT attachments URL: https://github.com/apache/couchdb-documentation/pull/120#issuecomment-291516030

@flimzy Ah, I see. To be fair, I messed up as well: I didn't check HEAD, only GET. I should check HEAD explicitly. Yeah, I imagine Accept-Ranges might not apply to PUT, but I will need to verify it.

As for 1.6, try checking out the 1.6.x branch, then look in the share/doc folder. I imagine a PR would then go to that 1.6.x branch, but I am not too familiar with that process, TBH. @wohali, you did some documentation work; do you have any guidance on updating the 1.6.x documentation and how it differs from 2.0?
[GitHub] flimzy commented on issue #120: Update docs for PUT attachments
flimzy commented on issue #120: Update docs for PUT attachments URL: https://github.com/apache/couchdb-documentation/pull/120#issuecomment-291514033

I expect `Accept-Ranges` is also invalid for PUT, but I wasn't entirely sure, so I left it alone. I'd also love to update this for the 1.6 docs. Do we have a way to update historical documentation?
[GitHub] flimzy commented on issue #120: Update docs for PUT attachments
flimzy commented on issue #120: Update docs for PUT attachments URL: https://github.com/apache/couchdb-documentation/pull/120#issuecomment-291513223

You're right; I modified the wrong headers. I meant to update the PUT description, not HEAD. Updated.
[GitHub] millayr closed pull request #883: Navbar refactor
millayr closed pull request #883: Navbar refactor URL: https://github.com/apache/couchdb-fauxton/pull/883
[GitHub] millayr opened a new pull request #883: Navbar refactor
millayr opened a new pull request #883: Navbar refactor URL: https://github.com/apache/couchdb-fauxton/pull/883

Trying to take over https://github.com/apache/couchdb-fauxton/pull/826
[GitHub] bshikhar13 closed issue #468: couch_peruser not working
bshikhar13 closed issue #468: couch_peruser not working URL: https://github.com/apache/couchdb/issues/468
[GitHub] wohali commented on issue #468: couch_peruser not working
wohali commented on issue #468: couch_peruser not working URL: https://github.com/apache/couchdb/issues/468#issuecomment-291431640

couch_peruser is indeed broken in 2.0.0; there are no plans to fix it at this time. The current plan is to replace the feature entirely with new functionality. If you need couch_peruser functionality, please use CouchDB 1.6.x.
[GitHub] bshikhar13 opened a new issue #468: couch_peruser not working
bshikhar13 opened a new issue #468: couch_peruser not working URL: https://github.com/apache/couchdb/issues/468

I am using CouchDB 2.0 on my Windows machine. I created the `_users` database and set the `couch_peruser` flag to true in the configuration window. After restarting CouchDB, when I added a user to the _users database, no new database was created. What could be the problem?