[GitHub] jiangphcn commented on issue #241: Add statement for multiple queris for _all_docs

2018-01-25 Thread GitBox
jiangphcn commented on issue #241: Add statement for multiple queris for 
_all_docs
URL: 
https://github.com/apache/couchdb-documentation/pull/241#issuecomment-360700379
 
 
   Cool, thanks @flimzy for your clarification and review.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sergey-safarov commented on issue #1049: Google Chrome console error, cannot view attachments

2018-01-25 Thread GitBox
sergey-safarov commented on issue #1049: Google Chrome console error, cannot 
view attachments
URL: 
https://github.com/apache/couchdb-fauxton/issues/1049#issuecomment-360695656
 
 
   Hello Alexis (@popojargo)
   Database contains `mp3` audio files.
   
   **Important**
   The console contains the same error when a doc without attachments is 
opened. I think the attachment list is not generated because some of the 
scripts failed.




[GitHub] hinesmr commented on issue #535: Mango Query won't run on _users database

2018-01-25 Thread GitBox
hinesmr commented on issue #535: Mango Query won't run on _users database
URL: https://github.com/apache/couchdb/issues/535#issuecomment-360688974
 
 
   @ptitjes This is working for me now on version 2.1.1. Thanks guys!




[GitHub] jiangphcn commented on a change in pull request #243: Add description about new endpoint _dbs_info

2018-01-25 Thread GitBox
jiangphcn commented on a change in pull request #243: Add description about new 
endpoint _dbs_info
URL: 
https://github.com/apache/couchdb-documentation/pull/243#discussion_r164025867
 
 

 ##
 File path: src/api/server/common.rst
 ##
 @@ -208,6 +208,115 @@
"locations"
 ]
 
+.. _api/server/dbs_info:
+
+==============
+``/_dbs_info``
+==============
+
+.. http:post:: /_dbs_info
+:synopsis: Returns information of a list of the specified databases
+
+Returns information of a list of the specified databases in the CouchDB
+instance. This enables you to request information about multiple databases
+in a single request, in place of multiple :get:`/{db}` requests.
+
+:header Content-Type: - :mimetype:`application/json`
+:code 200: Request completed successfully
+
+**Request**:
+
+.. code-block:: http
+
+POST /_dbs_info HTTP/1.1
+Accept: application/json
+Host: localhost:5984
+Content-Type: application/json
+
+{
+"keys": [
+"animals",
+"plants"
+]
+}
+
+**Response**:
+
+.. code-block:: http
+
+HTTP/1.1 200 OK
+Cache-Control: must-revalidate
+Content-Type: application/json
+Date: Sat, 20 Dec 2017 06:57:48 GMT
+Server: CouchDB (Erlang/OTP)
+
+[
+  {
+"key": "animals",
+"info": {
+  "db_name": "animals",
+  "update_seq": "52232",
+  "sizes": {
+"file": 1178613587,
+"external": 1713103872,
+"active": 1162451555
+  },
+  "purge_seq": 0,
+  "other": {
+"data_size": 1713103872
+  },
+  "doc_del_count": 0,
+  "doc_count": 52224,
+  "disk_size": 1178613587,
+  "disk_format_version": 6,
+  "data_size": 1162451555,
+  "compact_running": false,
+  "cluster": {
+"q": 8,
+"n": 3,
+"w": 2,
+"r": 2
+  },
+  "instance_start_time": "0"
+}
+  },
+  {
+"key": "plants",
+"info": {
+  "db_name": "plants",
+  "update_seq": "303",
+  "sizes": {
+"file": 3872387,
+"external": 2339,
+"active": 67475
+  },
+  "purge_seq": 0,
+  "other": {
+"data_size": 2339
+  },
+  "doc_del_count": 0,
+  "doc_count": 11,
+  "disk_size": 3872387,
+  "disk_format_version": 6,
+  "data_size": 67475,
+  "compact_running": false,
+  "cluster": {
+"q": 8,
+"n": 3,
+"w": 2,
+"r": 2
+  },
+  "instance_start_time": "0"
+}
+  }
+]
+
+.. note::
+The supported number of the specified databases in the list can be limited
+by modifying the `max_db_number_for_dbs_info_req` entry in configuration
+file. The default limit is 100.
+
 
 Review comment:
   Added with thanks.
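   As a quick sanity check of the request shape documented above, here is a 
minimal Python sketch that builds the `_dbs_info` POST request. The endpoint 
path and the `keys` body field come from the docs above; the helper name and 
base URL are illustrative assumptions.

   ```python
   import json

   # Assumption: a local CouchDB node on the default port.
   COUCH_URL = "http://localhost:5984"

   def build_dbs_info_request(db_names):
       """Build the URL, headers, and JSON body for POST /_dbs_info.

       The endpoint expects a JSON object with a "keys" array naming the
       databases to fetch info for, as shown in the docs above.
       """
       url = COUCH_URL + "/_dbs_info"
       headers = {"Content-Type": "application/json",
                  "Accept": "application/json"}
       body = json.dumps({"keys": list(db_names)})
       return url, headers, body

   url, headers, body = build_dbs_info_request(["animals", "plants"])
   # Send with any HTTP client, e.g.:
   # urllib.request.urlopen(urllib.request.Request(
   #     url, body.encode(), headers, method="POST"))
   ```

   The response is a JSON array with one `{"key": ..., "info": ...}` entry per 
requested database, as in the example response above.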




[GitHub] jiangphcn commented on a change in pull request #243: Add description about new endpoint _dbs_info

2018-01-25 Thread GitBox
jiangphcn commented on a change in pull request #243: Add description about new 
endpoint _dbs_info
URL: 
https://github.com/apache/couchdb-documentation/pull/243#discussion_r164024641
 
 

 ##
 File path: src/api/server/common.rst
 ##
 @@ -208,6 +208,115 @@
"locations"
 ]
 
+.. _api/server/dbs_info:
+
+==============
+``/_dbs_info``
+==============
+
+.. http:post:: /_dbs_info
+:synopsis: Returns information of a list of the specified databases
+
+Returns information of a list of the specified databases in the CouchDB
+instance. This enables you to request information about multiple databases
+in a single request, in place of multiple :get:`/{db}` requests.
+
+:header Content-Type: - :mimetype:`application/json`
+:code 200: Request completed successfully
 
 Review comment:
   @flimzy Hey Jonathan, only the `POST` request is supported at the moment, so 
there are no accepted query parameters. The reasons for supporting only a 
`POST` request instead of also a `GET` request are to
   i) avoid the URL length limitations
   ii) avoid URL encoding work for the caller




[GitHub] gizocz opened a new issue #1051: Failed to retrieve permissions.

2018-01-25 Thread GitBox
gizocz opened a new issue #1051: Failed to retrieve permissions.
URL: https://github.com/apache/couchdb-fauxton/issues/1051
 
 
   Cannot set permissions on databases with a '/' or '+' character in the 
database name.
   
   ## Steps to Reproduce
   1. click "Create Database", enter "a/b", click "Create"
   2. click "Permissions" => alert "Failed to retrieve permissions. Please try 
again. Reason:Database does not exist."; error on console: 
"https://localhost:6984/a/b/_security 404 ()". 
   3. try to add a role => alert "Could not update permissions - reason: Error: 
Database does not exist."
   
   ## Environment
   * Version used: Fauxton on Apache CouchDB v. 2.1.1 (from official docker hub 
repository)
   * Browser Name and version: Google Chrome v 63.0.3239.132
   * Operating System and version: Debian GNU/Linux 9.3
   




[GitHub] Antonio-Maranhao commented on issue #1050: Fix display of String editor when editing a document

2018-01-25 Thread GitBox
Antonio-Maranhao commented on issue #1050: Fix display of String editor when 
editing a document
URL: https://github.com/apache/couchdb-fauxton/pull/1050#issuecomment-360658031
 
 
   @popojargo the modal was showing up blank - without the editor




[GitHub] popojargo commented on issue #1049: Google Chrome console error, cannot view attachments

2018-01-25 Thread GitBox
popojargo commented on issue #1049: Google Chrome console error, cannot view 
attachments
URL: 
https://github.com/apache/couchdb-fauxton/issues/1049#issuecomment-360657498
 
 
   @sergey-safarov  What kind of attachment was that?
   




[GitHub] popojargo commented on issue #1049: Google Chrome console error, cannot view attachments

2018-01-25 Thread GitBox
popojargo commented on issue #1049: Google Chrome console error, cannot view 
attachments
URL: 
https://github.com/apache/couchdb-fauxton/issues/1049#issuecomment-360657467
 
 
   I'm not having the error with the current master branch.  
   
   Should we allow blob: as script-src?
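   
   If that route is taken, the change would presumably amount to adding 
`blob:` to the `script-src` directive of the Content-Security-Policy header 
Fauxton serves. An illustrative sketch only -- the actual source list in 
Fauxton's configuration may differ:
   
   ```
   Content-Security-Policy: default-src 'self'; script-src 'self' blob:
   ```
   
   With `blob:` in `script-src`, scripts loaded from object URLs would no 
longer be blocked by the browser.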




[GitHub] popojargo commented on issue #1050: Fix display of String editor when editing a document

2018-01-25 Thread GitBox
popojargo commented on issue #1050: Fix display of String editor when editing a 
document
URL: https://github.com/apache/couchdb-fauxton/pull/1050#issuecomment-360652827
 
 
   @Antonio-Maranhao  What was the previous display problem?




[GitHub] bedney commented on issue #979: couchjs fails to run on High Sierra from binary package

2018-01-25 Thread GitBox
bedney commented on issue #979: couchjs fails to run on High Sierra from binary 
package
URL: https://github.com/apache/couchdb/issues/979#issuecomment-360578447
 
 
   LGTM Jan :+1: - thanks!




[GitHub] Antonio-Maranhao opened a new pull request #1050: Fix display of String editor when editing a document

2018-01-25 Thread GitBox
Antonio-Maranhao opened a new pull request #1050: Fix display of String editor 
when editing a document
URL: https://github.com/apache/couchdb-fauxton/pull/1050
 
 
   ## Overview
   
   String editor 
   
   ## Testing recommendations
   
   Added a new Nightwatch test to check the String Editor.
   
   To manually test it: 
   - Edit or create a new document
   - Click on the "_id" field (or any other String field)
   - Click the button on the editor's gutter
   - The String Editor should show up
   - Modify the value and click Modify Text
   - The value is changed in the code editor
   
   ## Checklist
   
   - [x] Code is written and works correctly;
   - [x] Changes are covered by tests;
   - [ ] Documentation reflects the changes;
   - [ ] Update 
[rebar.config.script](https://github.com/apache/couchdb/blob/master/rebar.config.script)
 with the correct tag once a new Fauxton release is made
   




[GitHub] alleycat58uk commented on issue #979: couchjs fails to run on High Sierra from binary package

2018-01-25 Thread GitBox
alleycat58uk commented on issue #979: couchjs fails to run on High Sierra from 
binary package
URL: https://github.com/apache/couchdb/issues/979#issuecomment-360561787
 
 
   Works for me too.  Thanks




[GitHub] flimzy commented on issue #241: Add statement for multiple queris for _all_docs

2018-01-25 Thread GitBox
flimzy commented on issue #241: Add statement for multiple queris for _all_docs
URL: 
https://github.com/apache/couchdb-documentation/pull/241#issuecomment-360534585
 
 
   > I think that I need to mention that this is a newly added feature. Making 
a separate section can help explain the difference between the previous 
feature and the new feature.
   
   Sure. I didn't mean to suggest there shouldn't also be a new section to 
highlight the added functionality, just that it might be nice to ensure that 
the existing endpoint documentation also contains the full documentation for 
that endpoint.




[GitHub] flimzy commented on a change in pull request #243: Add description about new endpoint _dbs_info

2018-01-25 Thread GitBox
flimzy commented on a change in pull request #243: Add description about new 
endpoint _dbs_info
URL: 
https://github.com/apache/couchdb-documentation/pull/243#discussion_r163907626
 
 

 ##
 File path: src/api/server/common.rst
 ##
 @@ -208,6 +208,115 @@
"locations"
 ]
 
+.. _api/server/dbs_info:
+
+==============
+``/_dbs_info``
+==============
+
+.. http:post:: /_dbs_info
+:synopsis: Returns information of a list of the specified databases
+
+Returns information of a list of the specified databases in the CouchDB
+instance. This enables you to request information about multiple databases
+in a single request, in place of multiple :get:`/{db}` requests.
+
+:header Content-Type: - :mimetype:`application/json`
+:code 200: Request completed successfully
+
+**Request**:
+
+.. code-block:: http
+
+POST /_dbs_info HTTP/1.1
+Accept: application/json
+Host: localhost:5984
+Content-Type: application/json
+
+{
+"keys": [
+"animals",
+"plants"
+]
+}
+
+**Response**:
+
+.. code-block:: http
+
+HTTP/1.1 200 OK
+Cache-Control: must-revalidate
+Content-Type: application/json
+Date: Sat, 20 Dec 2017 06:57:48 GMT
+Server: CouchDB (Erlang/OTP)
+
+[
+  {
+"key": "animals",
+"info": {
+  "db_name": "animals",
+  "update_seq": "52232",
+  "sizes": {
+"file": 1178613587,
+"external": 1713103872,
+"active": 1162451555
+  },
+  "purge_seq": 0,
+  "other": {
+"data_size": 1713103872
+  },
+  "doc_del_count": 0,
+  "doc_count": 52224,
+  "disk_size": 1178613587,
+  "disk_format_version": 6,
+  "data_size": 1162451555,
+  "compact_running": false,
+  "cluster": {
+"q": 8,
+"n": 3,
+"w": 2,
+"r": 2
+  },
+  "instance_start_time": "0"
+}
+  },
+  {
+"key": "plants",
+"info": {
+  "db_name": "plants",
+  "update_seq": "303",
+  "sizes": {
+"file": 3872387,
+"external": 2339,
+"active": 67475
+  },
+  "purge_seq": 0,
+  "other": {
+"data_size": 2339
+  },
+  "doc_del_count": 0,
+  "doc_count": 11,
+  "disk_size": 3872387,
+  "disk_format_version": 6,
+  "data_size": 67475,
+  "compact_running": false,
+  "cluster": {
+"q": 8,
+"n": 3,
+"w": 2,
+"r": 2
+  },
+  "instance_start_time": "0"
+}
+  }
+]
+
+.. note::
+The supported number of the specified databases in the list can be limited
+by modifying the `max_db_number_for_dbs_info_req` entry in configuration
+file. The default limit is 100.
+
 
 Review comment:
   Should a 'version added' note be added?




[GitHub] flimzy commented on a change in pull request #243: Add description about new endpoint _dbs_info

2018-01-25 Thread GitBox
flimzy commented on a change in pull request #243: Add description about new 
endpoint _dbs_info
URL: 
https://github.com/apache/couchdb-documentation/pull/243#discussion_r163907454
 
 

 ##
 File path: src/api/server/common.rst
 ##
 @@ -208,6 +208,115 @@
"locations"
 ]
 
+.. _api/server/dbs_info:
+
+==============
+``/_dbs_info``
+==============
+
+.. http:post:: /_dbs_info
+:synopsis: Returns information of a list of the specified databases
+
+Returns information of a list of the specified databases in the CouchDB
+instance. This enables you to request information about multiple databases
+in a single request, in place of multiple :get:`/{db}` requests.
+
+:header Content-Type: - :mimetype:`application/json`
+:code 200: Request completed successfully
 
 Review comment:
   What are the accepted query parameters?  I gather `keys` takes an array of 
database names? Are there other supported params to include here?




[GitHub] Avaq commented on issue #1081: Replicator infinite failure loop

2018-01-25 Thread GitBox
Avaq commented on issue #1081: Replicator infinite failure loop
URL: https://github.com/apache/couchdb/issues/1081#issuecomment-360500296
 
 
   Hi @nickva, thank you for your response!
   
   > Noticed in the test behavior script you specified a heartbeat. In 2.x the 
replicator doesn't use heartbeats; instead it uses timeouts
   
   I didn't know heartbeats were removed. It shouldn't really matter for my 
case though. The only reason I'm specifying a heartbeat in my tests is to make 
the result more visual (so you don't have to wait ten seconds using 1.6 to see 
if something is happening). 
   
   I have adjusted my test.
   
   ```sh
   # We create our test database
   curl -X PUT localhost:5984/replication-source
   
   # We insert a design doc with a filter function that is guaranteed to take 
long
   # The reason is so we can simulate a database with a lot of documents which 
are
   # not going to pass in the filtering process.
   curl -X PUT localhost:5984/replication-source/_design/test -d 
'{"filters":{"test":"function(){var future = Date.now() + 2000; 
while(Date.now() < future){}; return false}"}}'
   
   # We insert a bunch of documents so that filtering them will take time. Note
   # that I increased the number from 20 to 100, because I have more CPU cores
   # this time around (I didn't consider that before).
   for i in {1..100}; do curl -X POST -H 'Content-Type: application/json' 
localhost:5984/replication-source -d '{"foo":"bar"}'; done
   ```
   ```sh
   # I send a request for changes to the database. This request resembles the 
request
   # a replication client might send very closely.
   curl 
'localhost:5984/replication-source/_changes?feed=normal=all_docs=0=test%2Ftest=1'
   ```
   
   I'm getting better results now. I do indeed see the `{"results":[`-line 
printed after about ten seconds, followed by a periodic newline, until finally 
the last sequence number. Unfortunately, this is not what's happening on the 
production environment, but these results are a huge step forward! Thank you.
   
   > To double check, is the replication itself running on a 2.x cluster? What 
are the versions of the targets and source? Are they all 2.x as well?
   
   There is one "central server" to which, and from which, a large number of 
clients push and pull subsets of information. The server runs a 2.x cluster, 
and the clients are single-node CouchDB instances ranging between version 1.6 
on Windows XP and 2.x on Windows 10.
   
   > Are there any proxies or load balancers involved and do you think they 
could affect the connections?
   
   The central server sits behind an nginx reverse proxy, which is now my prime 
suspect. Thank you for pointing this out to me.
   
   > How many replication jobs are running?
   
   There are a few replication jobs running within the central server itself, 
but they do not cause problems. At any given time, some fifty clients running 
their own replication jobs will be polling the server for changes.
   
   > In the case of filtered replications with a large source db and a 
restrictive filter, like you have, replications won't checkpoint unless they 
receive a document update via the filter. However, if it takes too long and 
the job is swapped out by the scheduler, it might not have a chance to 
checkpoint before it is stopped. The next time it starts it will use 0 as the 
changes feed start sequence, wait again, not get a document, be stopped, and 
so on.
   
   This sounds a lot like what I thought was happening, but every node only 
runs two replication jobs. One for upstream replication, and one for 
downstream. Neither are continuous.
   
   
   
   I will be investigating whether nginx might be buffering the response before 
sending it along, causing connections to drop.





[GitHub] Avaq commented on issue #1081: Replicator infinite failure loop

2018-01-25 Thread GitBox
Avaq commented on issue #1081: Replicator infinite failure loop
URL: https://github.com/apache/couchdb/issues/1081#issuecomment-360504857
 
 
   I just added `proxy_buffering off;` to my nginx configuration and I already 
tested it. It solved all of my problems. Thank you so much for thinking about 
this issue! Sorry for taking your time.
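   
   For anyone else hitting the same symptom, the relevant piece is the nginx 
location block that proxies CouchDB. A minimal sketch, assuming a stock 
reverse-proxy setup (host and port are placeholders):
   
   ```nginx
   location / {
       # Pass requests through to the CouchDB node behind the proxy.
       proxy_pass http://127.0.0.1:5984;

       # Without this, nginx buffers the response body, so the slowly
       # trickling _changes feed never reaches the replication client
       # and the connection appears to hang.
       proxy_buffering off;
   }
   ```
   
   `proxy_buffering` is a standard nginx directive; disabling it only in the 
CouchDB location keeps buffering on for the rest of the site.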





[GitHub] Avaq closed issue #1081: Replicator infinite failure loop

2018-01-25 Thread GitBox
Avaq closed issue #1081: Replicator infinite failure loop
URL: https://github.com/apache/couchdb/issues/1081
 
 
   




[GitHub] Avaq commented on issue #1081: Replicator infinite failure loop

2018-01-25 Thread GitBox
Avaq commented on issue #1081: Replicator infinite failure loop
URL: https://github.com/apache/couchdb/issues/1081#issuecomment-360502678
 
 
   :man_facepalming: 
https://www.nginx.com/resources/admin-guide/reverse-proxy/#buffers




[GitHub] nickva commented on issue #1126: Hide credential information in replication document for reader

2018-01-25 Thread GitBox
nickva commented on issue #1126: Hide credential information in replication 
document for reader
URL: https://github.com/apache/couchdb/pull/1126#issuecomment-360502226
 
 
   Looks good. At first I thought it was a bit heavy-handed to remove all the 
headers, but it's probably safer. See the requested changes, then +1.
   




[GitHub] nickva commented on a change in pull request #1126: Hide credential information in replication document for reader

2018-01-25 Thread GitBox
nickva commented on a change in pull request #1126: Hide credential information 
in replication document for reader
URL: https://github.com/apache/couchdb/pull/1126#discussion_r163876372
 
 

 ##
 File path: src/couch_replicator/src/couch_replicator_docs.erl
 ##
 @@ -695,7 +695,8 @@ strip_credentials(Url) when is_binary(Url) ->
 "http\\1://\\2",
 [{return, binary}]);
 strip_credentials({Props}) ->
-{lists:keydelete(<<"oauth">>, 1, Props)}.
+Props0 = lists:keydelete(<<"oauth">>, 1, Props),
 
 Review comment:
   There are a few unit tests below, let's add a few tests for the new code as 
well.




[GitHub] nickva commented on a change in pull request #1126: Hide credential information in replication document for reader

2018-01-25 Thread GitBox
nickva commented on a change in pull request #1126: Hide credential information 
in replication document for reader
URL: https://github.com/apache/couchdb/pull/1126#discussion_r163876672
 
 

 ##
 File path: src/couch_replicator/src/couch_replicator_docs.erl
 ##
 @@ -695,7 +695,8 @@ strip_credentials(Url) when is_binary(Url) ->
 "http\\1://\\2",
 [{return, binary}]);
 strip_credentials({Props}) ->
-{lists:keydelete(<<"oauth">>, 1, Props)}.
+Props0 = lists:keydelete(<<"oauth">>, 1, Props),
 
 Review comment:
   Let's remove the bugzid bit from the commit as well.




[GitHub] nickva commented on a change in pull request #1126: Hide credential information in replication document for reader

2018-01-25 Thread GitBox
nickva commented on a change in pull request #1126: Hide credential information 
in replication document for reader
URL: https://github.com/apache/couchdb/pull/1126#discussion_r163875958
 
 

 ##
 File path: src/couch_replicator/src/couch_replicator_docs.erl
 ##
 @@ -695,7 +695,8 @@ strip_credentials(Url) when is_binary(Url) ->
 "http\\1://\\2",
 [{return, binary}]);
 strip_credentials({Props}) ->
-{lists:keydelete(<<"oauth">>, 1, Props)}.
+Props0 = lists:keydelete(<<"oauth">>, 1, Props),
 
 Review comment:
   Minor nit on variable names --  `Props1` might work a bit better here.
   
   Usually when adding new code towards the middle or the bottom of the 
function, start adding 1,2,3 like so `Props, Props1=newoperation(Props)..,  
...use Props1 from now on...`. If there is a large function that uses `Props` 
in the body and we need to modify it at the beginning of the function, then 
it's better to do ```function(Prop0 =...) -> Props = newoperation(Props0), 
```
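
   As a sketch of that convention (function and helper names here are
   illustrative, not from the PR):

```erlang
%% New code added mid-function: number forward from the existing binding.
strip(Props) ->
    Props1 = lists:keydelete(<<"oauth">>, 1, Props),
    {Props1}.

%% Value modified at the start of a large function: number the argument
%% instead, so the rest of the body keeps using the plain name.
strip_all(Props0) ->
    Props = lists:keydelete(<<"oauth">>, 1, Props0),
    use_everywhere(Props).
```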
   




[GitHub] Avaq commented on issue #1081: Replicator infinite failure loop

2018-01-25 Thread GitBox
Avaq commented on issue #1081: Replicator infinite failure loop
URL: https://github.com/apache/couchdb/issues/1081#issuecomment-360500296
 
 
   Hi @nickva, thank you for your response!
   
   > Noticed in the test behavior script you specified a heartbeat. In 2.x 
replicator doesn't use heartbeats, instead it uses timeouts
   
   I didn't know heartbeats were removed. It shouldn't really matter for my 
case though. The only reason I'm specifying a heartbeat in my tests is to make 
the result more visual (so you don't have to wait ten seconds using 1.6 to see 
if something is happening). 
   
   I have adjusted my test.
   
   ```sh
   # We create our test database
   curl -X PUT localhost:5984/replication-source
   
   # We insert a design doc with a filter function that is guaranteed to take 
long
   # The reason is so we can simulate a database with a lot of documents which 
are
   # not going to pass in the filtering process.
   curl -X PUT localhost:5984/replication-source/_design/test -d 
'{"filters":{"test":"function(){var future = Date.now() + 2000; 
while(Date.now() < future){}; return false}"}}'
   
   # We insert a bunch of documents so that filtering them will take time. Note
   # that I increased the number from 20 to 100, because I have more CPU cores
   # this time around (I didn't consider that before).
   for i in {1..100}; do curl -X POST -H 'Content-Type: application/json' 
localhost:5984/replication-source -d '{"foo":"bar"}'; done
   ```
   ```sh
   # I send a request for changes to the database. This request resembles the 
request
   # a replication client might send very closely.
   curl 
'localhost:5984/replication-source/_changes?feed=normal=all_docs=0=test%2Ftest=1'
   ```
   
   I'm getting better results now. I do indeed see the `{"results":[` line 
printed after about ten seconds, followed by a periodic newline, until finally 
returning the last sequence number. Unfortunately, this is not what's happening 
in the production environment, but these results are a huge step forward! Thank 
you.
   
   > To double check, is the replication itself running on a 2.x cluster? What 
are the versions of the targets and source? Are they all 2.x as well?
   
   There is one "central server" to which, and from which, a large number of 
clients push and pull subsets of information. The server runs a 2.x cluster, 
and the clients are single-node CouchDB instances ranging between version 1.6 
on Windows XP and 2.x on Windows 10.
   
   > Are there any proxies or load balancers involved and do you think they 
could affect the connections?
   
   The central server sits behind an nginx reverse proxy, which is now my prime 
suspect. Thank you for pointing this out to me.
   
   > How many replication jobs are running?
   
   There are a few replication jobs running within the central server itself, 
but they do not cause problems. At any given time, some fifty clients running 
their own replication jobs will be polling the server for changes.
   
   > In case of filtered replications, with large source db and a restrictive 
filter, like you have, replications won't checkpoint unless they receive a 
document update via the filter. However if it takes too long and the job is 
swapped out by the scheduler, it might not have chance to checkpoint, it will 
be stopped. Next time starts will use 0 for the changes feed start 0, and it 
will wait again, not get a document, will be stopped, etc.
   
   This sounds a lot like what I thought was happening, but every node only 
runs two replication jobs: one for upstream replication, and one for 
downstream. Neither is continuous.
   
   
   
   I will be investigating whether nginx might be buffering the response before 
sending it along, causing connections to drop.
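
   For reference, nginx buffers proxied responses by default, which can hold 
back a slowly trickling `_changes` feed until the buffer fills or the 
connection closes. A sketch of the directives commonly used to rule this out 
(values are illustrative, not a recommendation):

```nginx
location / {
    proxy_pass http://127.0.0.1:5984;
    # Stream the response as it is produced instead of buffering it,
    # so the periodic newlines of the changes feed reach the client.
    proxy_buffering off;
    # Allow slow filtered _changes responses to trickle without nginx
    # closing the upstream connection.
    proxy_read_timeout 300s;
}
```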




[GitHub] garrensmith closed pull request #1048: Run Nightwatch tests with suiteRetries

2018-01-25 Thread GitBox
garrensmith closed pull request #1048: Run Nightwatch tests with suiteRetries
URL: https://github.com/apache/couchdb-fauxton/pull/1048
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/.travis.yml b/.travis.yml
index 16e203d91..8070bf9ec 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -17,7 +17,7 @@ before_script:
   - DIST=./dist/debug ./bin/fauxton &
   - sleep 30
 script:
-  - travis_retry ./node_modules/.bin/grunt nightwatch
+  - ./node_modules/.bin/grunt nightwatch_retries
 after_script:
   - npm run docker:down
 
diff --git a/Gruntfile.js b/Gruntfile.js
index 6ca330ec3..215bbf4a2 100644
--- a/Gruntfile.js
+++ b/Gruntfile.js
@@ -164,6 +164,14 @@ module.exports = function (grunt) {
 options: {
   maxBuffer: 1000 * 1024
 }
+  },
+  start_nightWatch_with_retries: {
+command: 'node ' + __dirname + 
'/node_modules/nightwatch/bin/nightwatch' +
+' -c ' + __dirname + '/test/nightwatch_tests/nightwatch.json' +
+' --suiteRetries 3',
+options: {
+  maxBuffer: 1000 * 1024
+}
   }
 },
 
@@ -253,4 +261,6 @@ module.exports = function (grunt) {
*/
   //Start Nightwatch test from terminal, using: $ grunt nightwatch
   grunt.registerTask('nightwatch', ['initNightwatch', 
'exec:start_nightWatch']);
+  //Same as above but the Nightwatch runner will retry tests 3 times before 
failing
+  grunt.registerTask('nightwatch_retries', ['initNightwatch', 
'exec:start_nightWatch_with_retries']);
 };
diff --git a/package.json b/package.json
index cdd5fdb09..83a26dd06 100644
--- a/package.json
+++ b/package.json
@@ -130,6 +130,7 @@
 "dev": "node ./devserver.js",
 "devtests": "webpack-dev-server --config webpack.config.test-dev.js 
--debug --progress",
 "nightwatch": "grunt nightwatch",
+"nightwatch_retries": "grunt nightwatch_retries",
 "start": "node ./bin/fauxton",
 "start-debug": "DIST=./dist/debug node ./bin/fauxton",
 "preversion": "node version-check.js && grunt release",


 




[GitHub] jjrodrig opened a new pull request #1127: Fix for issue #603 - Error 500 when creating a db below quorum

2018-01-25 Thread GitBox
jjrodrig opened a new pull request #1127: Fix for issue #603 - Error 500 when 
creating a db below quorum
URL: https://github.com/apache/couchdb/pull/1127
 
 
   
   
   ## Overview
   The current behaviour of DB creation in a degraded-cluster situation is not 
consistent with the general behaviour described for document creation in the 
same situation.
   
   > The number of copies of a document with the same revision that have to be 
read before CouchDB returns with a 200 is equal to a half of total copies of 
the document plus one. It is the same for the number of nodes that need to save 
a document before a write is returned with 201. If there are less nodes than 
that number, then 202 is returned. Both read and write numbers can be specified 
with a request as r and w parameters accordingly.
   
   The current behaviour for database creation in a cluster is:
   - Database creation returns 201 - Created if all nodes respond ok
   - Database creation returns 202 - Accepted if the quorum is met
   - Database creation returns 500 - Error if the responses are below quorum
   
   The quorum is the default: number of nodes / 2 + 1
   
   This PR changes the database creation result to the following behaviour:
   - Database creation returns 201 - Created if the quorum is met
   - Database creation returns 202 - Accepted if at least one node responds ok
   - Database creation returns 500 - Error if there is no correct response 
from any node
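   
   The proposed decision rule can be sketched as a small function (a sketch of 
the PR's intent in Python, not the actual Erlang implementation):

```python
def db_create_status(ok_responses: int, total_nodes: int) -> int:
    """Map the number of nodes that acknowledged a DB creation to the
    HTTP status code described in this PR."""
    quorum = total_nodes // 2 + 1
    if ok_responses >= quorum:
        return 201  # Created: quorum met
    if ok_responses >= 1:
        return 202  # Accepted: at least one node responded ok
    return 500      # Error: no correct response from any node

# With the default 3 nodes, quorum is 2:
print(db_create_status(3, 3))  # 201
print(db_create_status(1, 3))  # 202
print(db_create_status(0, 3))  # 500
```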
   
   ## Testing recommendations
   
   - All previous tests are ok
   - I didn't identify testing infrastructure to test cluster degradation 
issues.
   - I've focused on chttpd and javascript tests.
   - I've skipped the reduce_builtin.js test, as it is failing even on the 
master branch:
   `test/javascript/tests/reduce_builtin.js
   Error: {gen_server,call,[<0.2426.1>,{get_state,49},infinity]}`
   
   `make check apps=chttpd ignore_js_suites=reduce_builtin`
   
   ## Related Issues or Pull Requests
   Issue #603 
   
   ## Checklist
   
   - [x] Code is written and works correctly;
   - [x] Changes are covered by tests; (DB creation yes, Cluster degradation no)
   - [ ] Documentation reflects the changes;
   




[GitHub] jiangphcn commented on issue #233: Update documentation to describe queries support for /_all_docs

2018-01-25 Thread GitBox
jiangphcn commented on issue #233: Update documentation to describe queries 
support for /_all_docs
URL: 
https://github.com/apache/couchdb-documentation/issues/233#issuecomment-360408702
 
 
   can track with https://github.com/apache/couchdb-documentation/pull/241




[GitHub] sergey-safarov commented on issue #1125: Very slow replication for some of databases

2018-01-25 Thread GitBox
sergey-safarov commented on issue #1125: Very slow replication for some of 
databases 
URL: https://github.com/apache/couchdb/issues/1125#issuecomment-360407395
 
 
   I increased `max_http_request_size` up to 10 times; the issue still 
persists.
   I can create a VM with CouchDB master installed and one database that 
cannot be replicated.
   If you provide your public ssh key, you can look at the issue on that host.




[GitHub] jiangphcn opened a new pull request #243: Add description about new endpoint _dbs_info

2018-01-25 Thread GitBox
jiangphcn opened a new pull request #243: Add description about new endpoint 
_dbs_info
URL: https://github.com/apache/couchdb-documentation/pull/243
 
 
   
   
   
   
   ## Overview
   
   
   
   Introduce documentation for `_dbs_info`, which returns information about a 
list of specified databases in the CouchDB instance. This enables you to 
request information about multiple databases in a single request, in place of 
multiple `GET /{db}` requests.
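   
   For illustration, the request POSTs the database names of interest to the 
new endpoint (database names below are made up):

```json
{
    "keys": ["db1", "db2"]
}
```

   Each entry in the response then carries the same information a `GET /{db}` 
call would return.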
   
   ## Testing recommendations
   
   
   See test in https://github.com/apache/couchdb/pull/1082
   
   ## GitHub issue number
   
   
   https://github.com/apache/couchdb/issues/822
   
   ## Related Pull Requests
   
   
   https://github.com/apache/couchdb/issues/822
   https://github.com/apache/couchdb/pull/1082
   
   ## Checklist
   
   - [X] Documentation is written and is accurate;
   - [X] `make check` passes with no errors
   - [ ] Update 
[rebar.config.script](https://github.com/apache/couchdb/blob/master/rebar.config.script)
 with the commit hash once this PR is rebased and merged
   




[GitHub] jiangphcn commented on issue #241: Add statement for multiple queris for _all_docs

2018-01-25 Thread GitBox
jiangphcn commented on issue #241: Add statement for multiple queris for 
_all_docs
URL: 
https://github.com/apache/couchdb-documentation/pull/241#issuecomment-360393439
 
 
   Thanks Jonathan @flimzy. I just added a commit changing `documents` to 
`queries` to make it clearer. 
   
   Regarding comment below
   > Should the queries parameter not be added to the existing POST 
/{db}/_all_docs documentation? Having a separate example like this is probably 
still useful, but since the new functionality is added to an existing endpoint, 
I would hate not to have complete documentation in one canonical place, as well.
   
   I think I need to mention that this is a newly added feature. A separate 
section helps explain the difference between the previous behaviour and the 
new one. In addition, readers can easily find the newly added section because 
it sits right next to the `POST /{db}/_all_docs` documentation. The current 
organization also matches the page at 
http://docs.couchdb.org/en/latest/api/ddoc/views.html
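   
   For illustration, the new section documents sending a `queries` array in 
which each element accepts the usual view parameters; a request body might 
look like this (keys and parameter values are illustrative):

```json
{
    "queries": [
        {"keys": ["some-doc-id"]},
        {"limit": 3, "skip": 2}
    ]
}
```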
   

