[jira] [Closed] (COUCHDB-2668) Regression: attachment Etag references to document revision, not to digest

2015-04-19 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson closed COUCHDB-2668.
--
Resolution: Fixed

 Regression: attachment Etag references to document revision, not to digest
 --

 Key: COUCHDB-2668
 URL: https://issues.apache.org/jira/browse/COUCHDB-2668
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Database Core, HTTP Interface
Reporter: Alexander Shorin
Assignee: Robert Newson
Priority: Blocker
  Labels: regression
 Fix For: 2.0.0


 In CouchDB 1.6.1:
 {code}
 $ curl -XPUT http://localhost:5984/db/doc/att -d 'Hello, CouchDB!' -H 'Content-Type: text/plain'
 {"ok":true,"id":"doc","rev":"1-3ead39fb9d2538602f817e0bdc00fe26"}
 $ curl -XGET -v http://localhost:5984/db/doc/att
 * Hostname was NOT found in DNS cache
 *   Trying 127.0.0.1...
 * Connected to localhost (127.0.0.1) port 5984 (#0)
 > GET /db/doc/att HTTP/1.1
 > User-Agent: curl/7.39.0
 > Host: localhost:5984
 > Accept: */*
 >
 < HTTP/1.1 200 OK
 < Server: CouchDB/1.7.0 (Erlang OTP/17)
 < ETag: "+4vGmBKGmQoMe7ojcTyiSA=="
 < Date: Sun, 19 Apr 2015 01:17:41 GMT
 < Content-Type: text/plain
 < Content-Length: 15
 < Cache-Control: must-revalidate
 < Accept-Ranges: none
 <
 * Connection #0 to host localhost left intact
 Hello, CouchDB!%
 {code}
 In CouchDB 2.0:
 {code}
 $ curl -XPUT http://localhost:15984/db/doc/att -d 'Hello, CouchDB!' -H 'Content-Type: text/plain'
 {"ok":true,"id":"doc","rev":"1-3ead39fb9d2538602f817e0bdc00fe26"}
 $ curl -XGET -v http://localhost:15984/db/doc/att
 * Hostname was NOT found in DNS cache
 *   Trying 127.0.0.1...
 * Connected to localhost (127.0.0.1) port 15984 (#0)
 > GET /db/doc/att HTTP/1.1
 > User-Agent: curl/7.39.0
 > Host: localhost:15984
 > Accept: */*
 >
 < HTTP/1.1 200 OK
 < Server: CouchDB/42c9047 (Erlang OTP/17)
 < ETag: "1-3ead39fb9d2538602f817e0bdc00fe26"
 < Date: Sun, 19 Apr 2015 01:14:11 GMT
 < Content-Type: text/plain
 < Content-Length: 15
 < Cache-Control: must-revalidate
 < Accept-Ranges: none
 <
 * Connection #0 to host localhost left intact
 Hello, CouchDB!%
 {code}
 If the document is updated but the attachment is not, its ETag changes as well:
 {code}
 $ curl -XGET -v http://localhost:15984/db/doc/att
 * Hostname was NOT found in DNS cache
 *   Trying 127.0.0.1...
 * Connected to localhost (127.0.0.1) port 15984 (#0)
 > GET /db/doc/att HTTP/1.1
 > User-Agent: curl/7.39.0
 > Host: localhost:15984
 > Accept: */*
 >
 < HTTP/1.1 200 OK
 < Server: CouchDB/42c9047 (Erlang OTP/17)
 < ETag: "3-90c8a1947115ffb2032a217ed3d3c624"
 < Date: Sun, 19 Apr 2015 01:15:55 GMT
 < Content-Type: text/plain
 < Content-Length: 15
 < Cache-Control: must-revalidate
 < Accept-Ranges: none
 <
 * Connection #0 to host localhost left intact
 Hello, CouchDB!%
 {code}
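 For context, the digest-based ETag is what makes conditional requests keep working
 across unrelated document updates. A minimal sketch of the intended caching
 behaviour, reusing the attachment above (the ETag value is the one returned by the
 1.6.x-style response; other response headers omitted):
 {code}
 $ curl -v http://localhost:5984/db/doc/att -H 'If-None-Match: "+4vGmBKGmQoMe7ojcTyiSA=="'
 < HTTP/1.1 304 Not Modified
 {code}
 With a revision-based ETag, the same conditional request stops matching as soon as
 the document gains a new revision, so clients re-download an attachment that has
 not changed.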



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (COUCHDB-2668) Regression: attachment Etag references to document revision, not to digest

2015-04-19 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson reopened COUCHDB-2668:


 Regression: attachment Etag references to document revision, not to digest
 --

 Key: COUCHDB-2668
 URL: https://issues.apache.org/jira/browse/COUCHDB-2668
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Database Core, HTTP Interface
Reporter: Alexander Shorin
Assignee: Robert Newson
Priority: Blocker
  Labels: regression
 Fix For: 2.0.0


 In CouchDB 1.6.1:
 {code}
 $ curl -XPUT http://localhost:5984/db/doc/att -d 'Hello, CouchDB!' -H 'Content-Type: text/plain'
 {"ok":true,"id":"doc","rev":"1-3ead39fb9d2538602f817e0bdc00fe26"}
 $ curl -XGET -v http://localhost:5984/db/doc/att
 * Hostname was NOT found in DNS cache
 *   Trying 127.0.0.1...
 * Connected to localhost (127.0.0.1) port 5984 (#0)
 > GET /db/doc/att HTTP/1.1
 > User-Agent: curl/7.39.0
 > Host: localhost:5984
 > Accept: */*
 >
 < HTTP/1.1 200 OK
 < Server: CouchDB/1.7.0 (Erlang OTP/17)
 < ETag: "+4vGmBKGmQoMe7ojcTyiSA=="
 < Date: Sun, 19 Apr 2015 01:17:41 GMT
 < Content-Type: text/plain
 < Content-Length: 15
 < Cache-Control: must-revalidate
 < Accept-Ranges: none
 <
 * Connection #0 to host localhost left intact
 Hello, CouchDB!%
 {code}
 In CouchDB 2.0:
 {code}
 $ curl -XPUT http://localhost:15984/db/doc/att -d 'Hello, CouchDB!' -H 'Content-Type: text/plain'
 {"ok":true,"id":"doc","rev":"1-3ead39fb9d2538602f817e0bdc00fe26"}
 $ curl -XGET -v http://localhost:15984/db/doc/att
 * Hostname was NOT found in DNS cache
 *   Trying 127.0.0.1...
 * Connected to localhost (127.0.0.1) port 15984 (#0)
 > GET /db/doc/att HTTP/1.1
 > User-Agent: curl/7.39.0
 > Host: localhost:15984
 > Accept: */*
 >
 < HTTP/1.1 200 OK
 < Server: CouchDB/42c9047 (Erlang OTP/17)
 < ETag: "1-3ead39fb9d2538602f817e0bdc00fe26"
 < Date: Sun, 19 Apr 2015 01:14:11 GMT
 < Content-Type: text/plain
 < Content-Length: 15
 < Cache-Control: must-revalidate
 < Accept-Ranges: none
 <
 * Connection #0 to host localhost left intact
 Hello, CouchDB!%
 {code}
 If the document is updated but the attachment is not, its ETag changes as well:
 {code}
 $ curl -XGET -v http://localhost:15984/db/doc/att
 * Hostname was NOT found in DNS cache
 *   Trying 127.0.0.1...
 * Connected to localhost (127.0.0.1) port 15984 (#0)
 > GET /db/doc/att HTTP/1.1
 > User-Agent: curl/7.39.0
 > Host: localhost:15984
 > Accept: */*
 >
 < HTTP/1.1 200 OK
 < Server: CouchDB/42c9047 (Erlang OTP/17)
 < ETag: "3-90c8a1947115ffb2032a217ed3d3c624"
 < Date: Sun, 19 Apr 2015 01:15:55 GMT
 < Content-Type: text/plain
 < Content-Length: 15
 < Cache-Control: must-revalidate
 < Accept-Ranges: none
 <
 * Connection #0 to host localhost left intact
 Hello, CouchDB!%
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (COUCHDB-2668) Regression: attachment Etag references to document revision, not to digest

2015-04-19 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-2668.

Resolution: Fixed

 Regression: attachment Etag references to document revision, not to digest
 --

 Key: COUCHDB-2668
 URL: https://issues.apache.org/jira/browse/COUCHDB-2668
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Database Core, HTTP Interface
Reporter: Alexander Shorin
Priority: Blocker
 Fix For: 2.0.0


 In CouchDB 1.6.1:
 {code}
 $ curl -XPUT http://localhost:5984/db/doc/att -d 'Hello, CouchDB!' -H 'Content-Type: text/plain'
 {"ok":true,"id":"doc","rev":"1-3ead39fb9d2538602f817e0bdc00fe26"}
 $ curl -XGET -v http://localhost:5984/db/doc/att
 * Hostname was NOT found in DNS cache
 *   Trying 127.0.0.1...
 * Connected to localhost (127.0.0.1) port 5984 (#0)
 > GET /db/doc/att HTTP/1.1
 > User-Agent: curl/7.39.0
 > Host: localhost:5984
 > Accept: */*
 >
 < HTTP/1.1 200 OK
 < Server: CouchDB/1.7.0 (Erlang OTP/17)
 < ETag: "+4vGmBKGmQoMe7ojcTyiSA=="
 < Date: Sun, 19 Apr 2015 01:17:41 GMT
 < Content-Type: text/plain
 < Content-Length: 15
 < Cache-Control: must-revalidate
 < Accept-Ranges: none
 <
 * Connection #0 to host localhost left intact
 Hello, CouchDB!%
 {code}
 In CouchDB 2.0:
 {code}
 $ curl -XPUT http://localhost:15984/db/doc/att -d 'Hello, CouchDB!' -H 'Content-Type: text/plain'
 {"ok":true,"id":"doc","rev":"1-3ead39fb9d2538602f817e0bdc00fe26"}
 $ curl -XGET -v http://localhost:15984/db/doc/att
 * Hostname was NOT found in DNS cache
 *   Trying 127.0.0.1...
 * Connected to localhost (127.0.0.1) port 15984 (#0)
 > GET /db/doc/att HTTP/1.1
 > User-Agent: curl/7.39.0
 > Host: localhost:15984
 > Accept: */*
 >
 < HTTP/1.1 200 OK
 < Server: CouchDB/42c9047 (Erlang OTP/17)
 < ETag: "1-3ead39fb9d2538602f817e0bdc00fe26"
 < Date: Sun, 19 Apr 2015 01:14:11 GMT
 < Content-Type: text/plain
 < Content-Length: 15
 < Cache-Control: must-revalidate
 < Accept-Ranges: none
 <
 * Connection #0 to host localhost left intact
 Hello, CouchDB!%
 {code}
 If the document is updated but the attachment is not, its ETag changes as well:
 {code}
 $ curl -XGET -v http://localhost:15984/db/doc/att
 * Hostname was NOT found in DNS cache
 *   Trying 127.0.0.1...
 * Connected to localhost (127.0.0.1) port 15984 (#0)
 > GET /db/doc/att HTTP/1.1
 > User-Agent: curl/7.39.0
 > Host: localhost:15984
 > Accept: */*
 >
 < HTTP/1.1 200 OK
 < Server: CouchDB/42c9047 (Erlang OTP/17)
 < ETag: "3-90c8a1947115ffb2032a217ed3d3c624"
 < Date: Sun, 19 Apr 2015 01:15:55 GMT
 < Content-Type: text/plain
 < Content-Length: 15
 < Cache-Control: must-revalidate
 < Accept-Ranges: none
 <
 * Connection #0 to host localhost left intact
 Hello, CouchDB!%
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2237) Add a 'live' sugar for 'continuous'

2015-04-04 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395812#comment-14395812
 ] 

Robert Newson commented on COUCHDB-2237:


Let's get some votes on the original feed=live (as alias for feed=continuous) 
suggestion, then.

I'm +1; it's clearly been accepted/endorsed by the PouchDB community, so the 
initial concerns about confusion have not played out.
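
For reference, a minimal sketch of what the alias would look like on the wire, 
assuming it maps directly onto the existing continuous mode:

{code}
# today
$ curl 'http://localhost:5984/db/_changes?feed=continuous&since=now'
# proposed sugar, identical semantics
$ curl 'http://localhost:5984/db/_changes?feed=live&since=now'
{code}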


 Add a 'live' sugar for 'continuous'
 ---

 Key: COUCHDB-2237
 URL: https://issues.apache.org/jira/browse/COUCHDB-2237
 Project: CouchDB
  Issue Type: Improvement
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Dale Harvey

 With PouchDB we generally try to stick to the same param names as Couch; we 
 are even changing some we implemented first to be compatible 
 (https://github.com/pouchdb/pouchdb/issues/2193).
 However, 'continuous' sucks to type: it's confusing to type and spell and 
 everyone gets it wrong. We still support it but switched to documenting it as 
 'live', and life is awesome again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2594) Single node mode: remove warning

2015-04-04 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395953#comment-14395953
 ] 

Robert Newson commented on COUCHDB-2594:


I think the 2.0 blocker list included a button or screen in Fauxton to allow 
the user to choose single node versus add more nodes and only after that 
would database creation succeed.

Joan is right that we have not addressed the behavior for nodes added *after* 
initial cluster setup is complete.

However, I don't think the post facto log warning has really helped anyone 
figure out their mistake (and the only way to fix it is to delete and recreate).

A stronger mechanism would be to remove the silent reduction of N when creating a 
database and instead return a 400 or 500 error (we'll have to bikeshed a little 
on whether the user is in error for asking for more replicas than the cluster 
can create, or the server is in error for not having enough nodes to satisfy 
the user).

When choosing single-node mode in setup, the default N value should be set to 1 
so that database creations do not return the error.

Should the administrator subsequently add a node, we should increase the 
default N for all nodes, up to some threshold (3 would be a good choice). 
Alternatively, the act of adding the node through Fauxton could ask the 
administrator for the new N value (or at least confirm our new suggestion of 
min(3, number_of_nodes_in_the_new_cluster)).

All of this is a UI nicety over the raw mechanics of fabric and mem3, which 
would not be substantially altered. For expert users like Cloudant, all cluster 
operations would be automated by some other system which ultimately alters both 
the contents of the INI files and the runtime state of the config application.
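
To make that concrete, a hypothetical exchange on a single-node cluster once the 
silent reduction is removed (the status code and error body shown are 
placeholders for whatever we bikeshed our way to):

{code}
$ curl -X PUT 'http://localhost:15984/db?n=3'
{"error":"not_enough_nodes","reason":"requested 3 replicas but the cluster has 1 node"}

$ curl -X PUT 'http://localhost:15984/db?n=1'
{"ok":true}
{code}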



 Single node mode: remove warning
 

 Key: COUCHDB-2594
 URL: https://issues.apache.org/jira/browse/COUCHDB-2594
 Project: CouchDB
  Issue Type: Task
  Security Level: public(Regular issues) 
  Components: Database Core
Reporter: Robert Kowalski
Priority: Blocker
 Fix For: 2.0.0


 we have to remove a warning that is sent as a response if the node is not 
 joined into a multi-node cluster and has no membership.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-1652) CouchDB does not release disk space from .deleted files

2015-04-02 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392449#comment-14392449
 ] 

Robert Newson commented on COUCHDB-1652:


Does this ever happen without the compaction daemon enabled? I wonder if it's 
holding the db open (which would be ironic...).

 CouchDB does not release disk space from .deleted files
 ---

 Key: COUCHDB-1652
 URL: https://issues.apache.org/jira/browse/COUCHDB-1652
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.2
 Environment: Ubuntu 8.04, 10.04, 12.04 , CouchDB 1.2.0, CouchDB 
 1.2.1, Erlang 14b4 64bit platform
Reporter: Simon Eisenmann
Assignee: Randall Leeds

 I noticed a disk space increase over time on long-running servers, though the 
 database itself did not grow. After investigation I found that there are 
 lots of files still open which have already been deleted.
 I see this on all installations on Ubuntu 8.04, 10.04 and 12.04 all 64bit 
 with CouchDB 1.2.0 and 1.2.1 with Erlang 14b4.
 lsof |grep deleted
 beam.smp   4845  couchdb   24u  REG  254,1  5890159  171082  /var/lib/couchdb/.delete/582878aeee568e3c06dc3262fdac494b (deleted)
 beam.smp   4845  couchdb   25u  REG  254,1  5890159  171080  /var/lib/couchdb/.delete/95b92c25e3d7ea2f2045a2ee37afd8fe (deleted)
 beam.smp   4845  couchdb   26u  REG  254,1  5890183  171081  /var/lib/couchdb/.delete/bb616c2baae507b7ec890451650e40c9 (deleted)
 beam.smp   4845  couchdb   27u  REG  254,1  5890159  244324  /var/lib/couchdb/.delete/f7a4ba016d098c34f1ff97ca61a824f8 (deleted)
 beam.smp   4845  couchdb   28u  REG  254,1  5890159  171196
 CouchDB should release the file pointer on those files, to release the disk 
 space to tools like df.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (COUCHDB-2655) Indicate if read quorum (r) was reached

2015-04-02 Thread Robert Newson (JIRA)
Robert Newson created COUCHDB-2655:
--

 Summary: Indicate if read quorum (r) was reached
 Key: COUCHDB-2655
 URL: https://issues.apache.org/jira/browse/COUCHDB-2655
 Project: CouchDB
  Issue Type: New Feature
  Security Level: public (Regular issues)
Reporter: Robert Newson






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-1652) CouchDB does not release disk space from .deleted files

2015-03-31 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14389403#comment-14389403
 ] 

Robert Newson commented on COUCHDB-1652:


Hi,

This could be Erlang. We (Cloudant) know that R14B01 has a rare bug wherein 
closing a file does not cause the VM to release the file descriptor. Over time 
these accumulate. If that's the cause here, the only solution is to restart the 
service or upgrade your Erlang version.
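
A quick way to check whether an affected host is in this state (same technique 
as the lsof output quoted below, just counted):

{code}
# count deleted-but-still-open files held by beam.smp (the Erlang VM running CouchDB)
$ lsof | grep beam.smp | grep -c '(deleted)'
{code}

If that number only ever grows across compactions, the VM is leaking descriptors 
as described above.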

 CouchDB does not release disk space from .deleted files
 ---

 Key: COUCHDB-1652
 URL: https://issues.apache.org/jira/browse/COUCHDB-1652
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.2
 Environment: Ubuntu 8.04, 10.04, 12.04 , CouchDB 1.2.0, CouchDB 
 1.2.1, Erlang 14b4 64bit platform
Reporter: Simon Eisenmann
Assignee: Randall Leeds

 I noticed a disk space increase over time on long-running servers, though the 
 database itself did not grow. After investigation I found that there are 
 lots of files still open which have already been deleted.
 I see this on all installations on Ubuntu 8.04, 10.04 and 12.04 all 64bit 
 with CouchDB 1.2.0 and 1.2.1 with Erlang 14b4.
 lsof |grep deleted
 beam.smp   4845  couchdb   24u  REG  254,1  5890159  171082  /var/lib/couchdb/.delete/582878aeee568e3c06dc3262fdac494b (deleted)
 beam.smp   4845  couchdb   25u  REG  254,1  5890159  171080  /var/lib/couchdb/.delete/95b92c25e3d7ea2f2045a2ee37afd8fe (deleted)
 beam.smp   4845  couchdb   26u  REG  254,1  5890183  171081  /var/lib/couchdb/.delete/bb616c2baae507b7ec890451650e40c9 (deleted)
 beam.smp   4845  couchdb   27u  REG  254,1  5890159  244324  /var/lib/couchdb/.delete/f7a4ba016d098c34f1ff97ca61a824f8 (deleted)
 beam.smp   4845  couchdb   28u  REG  254,1  5890159  171196
 CouchDB should release the file pointer on those files, to release the disk 
 space to tools like df.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (COUCHDB-2654) Support Content-Range for Attachment PUT requests

2015-03-30 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson closed COUCHDB-2654.
--
Resolution: Won't Fix

This has been raised before. Unfortunately the spec has been clarified to state 
that this is not permitted.

RFC 7231:

An origin server that allows PUT on a given target resource MUST send
   a 400 (Bad Request) response to a PUT request that contains a
   Content-Range header field (Section 4.2 of [RFC7233]), since the
   payload is likely to be partial content that has been mistakenly PUT
   as a full representation.  Partial content updates are possible by
   targeting a separately identified resource with state that overlaps a
   portion of the larger resource, or by using a different method that
   has been specifically defined for partial updates (for example, the
   PATCH method defined in [RFC5789]).
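
In other words, a compliant server has to reject such a request outright. A 
sketch of the exchange the RFC mandates (the database, document, revision and 
byte range shown are illustrative):

{code}
$ curl -X PUT 'http://localhost:5984/db/doc/att?rev=1-...' \
       -H 'Content-Range: bytes 0-11/100' \
       -H 'Content-Type: application/octet-stream' \
       --data-binary 'partial data'
HTTP/1.1 400 Bad Request
{code}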

 Support Content-Range for Attachment PUT requests
 -

 Key: COUCHDB-2654
 URL: https://issues.apache.org/jira/browse/COUCHDB-2654
 Project: CouchDB
  Issue Type: Improvement
  Security Level: public(Regular issues) 
  Components: Database Core, HTTP Interface
Reporter: Matthias Reik

 This ticket is a result of my question on 
 [stackoverflow|http://stackoverflow.com/questions/29228210/does-couchdb-suppport-content-range-in-attachment-put-requests].
 It would be nice to add support for updating/editing/adding to an existing 
 attachment by making use of the Content-Range header on PUT requests. It is 
 already supported for GET requests (even though I'm not sure whether it's 
 according to [RFC 
 2616|http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.16]).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (COUCHDB-2638) CouchDB should not be writing /etc/couchdb/local.ini

2015-03-15 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-2638.

Resolution: Invalid

Runtime reconfiguration of CouchDB is performed via the /_config endpoint; we 
recommend that over editing local.ini on disk.

local.ini is designed to be written to by CouchDB when the configuration 
changes; this is deliberate.
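
For example, a runtime change made through the HTTP config API instead of the 
file (the credentials and the setting shown are illustrative; the response body 
is the previous value):

{code}
$ curl -X PUT http://admin:password@localhost:5984/_config/log/level -d '"debug"'
"info"
{code}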

 CouchDB should not be writing /etc/couchdb/local.ini
 

 Key: COUCHDB-2638
 URL: https://issues.apache.org/jira/browse/COUCHDB-2638
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
Reporter: Yuri
 Fix For: 2.0.0


 I am getting messages like this in the log on FreeBSD:
  Could not write config file /usr/local/etc/couchdb/local.ini: permission 
  denied
 The problem is that CouchDB supplies the original copy of local.ini, and it 
 is treated as a template for this configuration file. It is placed into 
 /usr/local/etc/couchdb/local.ini.sample, and its copy is placed into 
 /usr/local/etc/couchdb/local.ini. Everything under /etc is what the admin 
 configures. Ideally the admin can compare local.ini and local.ini.sample and 
 see whether anything in the default configuration was modified compared to 
 the suggested sample.
 When the executable itself modifies local.ini too, this makes it very 
 confusing: the admin cannot tell whether he should or shouldn't touch this file.
 My suggestion is that CouchDB should copy local.ini under /var/db/, or 
 somewhere else, and write it there. /etc isn't supposed to be writable by the 
 process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2638) CouchDB should not be writing /etc/couchdb/local.ini

2015-03-15 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14362396#comment-14362396
 ] 

Robert Newson commented on COUCHDB-2638:


It would be useful if Yuri could confirm my last comment about the uuid 
(temporarily grant write permission, then diff local.ini with a copy taken from 
before startup).

 CouchDB should not be writing /etc/couchdb/local.ini
 

 Key: COUCHDB-2638
 URL: https://issues.apache.org/jira/browse/COUCHDB-2638
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
Reporter: Yuri
 Fix For: 2.0.0


 I am getting messages like this in the log on FreeBSD:
  Could not write config file /usr/local/etc/couchdb/local.ini: permission 
  denied
 The problem is that CouchDB supplies the original copy of local.ini, and it 
 is treated as a template for this configuration file. It is placed into 
 /usr/local/etc/couchdb/local.ini.sample, and its copy is placed into 
 /usr/local/etc/couchdb/local.ini. Everything under /etc is what the admin 
 configures. Ideally the admin can compare local.ini and local.ini.sample and 
 see whether anything in the default configuration was modified compared to 
 the suggested sample.
 When the executable itself modifies local.ini too, this makes it very 
 confusing: the admin cannot tell whether he should or shouldn't touch this file.
 My suggestion is that CouchDB should copy local.ini under /var/db/, or 
 somewhere else, and write it there. /etc isn't supposed to be writable by the 
 process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2638) CouchDB should not be writing /etc/couchdb/local.ini

2015-03-15 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14362384#comment-14362384
 ] 

Robert Newson commented on COUCHDB-2638:


On boot, couchdb will write a newly-generated uuid to local.ini if one is not 
present; I suspect that's the error the OP is seeing. It's easily solved, 
therefore, by supplying a uuid through other means. Besides this, only 
administrators calling PUT /_config will alter the config.

My point is that couchdb must be able to write out updated configuration values 
and that place is defined as local.ini.

From my pov, it's difficult to imagine administering a couchdb server without 
being able to alter configuration at runtime (and without restarting the 
service).
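
If the goal is simply to keep /etc read-only, pre-seeding the value means there 
is nothing for couchdb to write back on boot. A sketch, assuming the uuid is the 
only setting being generated (the value shown is just an example):

{code}
; /usr/local/etc/couchdb/local.ini
[couchdb]
uuid = 0a959b9b8227744b2d361accc93e4b7a
{code}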


 CouchDB should not be writing /etc/couchdb/local.ini
 

 Key: COUCHDB-2638
 URL: https://issues.apache.org/jira/browse/COUCHDB-2638
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
Reporter: Yuri
 Fix For: 2.0.0


 I am getting messages like this in the log on FreeBSD:
  Could not write config file /usr/local/etc/couchdb/local.ini: permission 
  denied
 The problem is that CouchDB supplies the original copy of local.ini, and it 
 is treated as a template for this configuration file. It is placed into 
 /usr/local/etc/couchdb/local.ini.sample, and its copy is placed into 
 /usr/local/etc/couchdb/local.ini. Everything under /etc is what the admin 
 configures. Ideally the admin can compare local.ini and local.ini.sample and 
 see whether anything in the default configuration was modified compared to 
 the suggested sample.
 When the executable itself modifies local.ini too, this makes it very 
 confusing: the admin cannot tell whether he should or shouldn't touch this file.
 My suggestion is that CouchDB should copy local.ini under /var/db/, or 
 somewhere else, and write it there. /etc isn't supposed to be writable by the 
 process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (COUCHDB-2536) During replication, documents with the same key are not properly replaced

2015-01-08 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson closed COUCHDB-2536.
--
Resolution: Not a Problem

This is by design.

When you replicated A to B you introduced a conflict on B. Your B server 
contains both the "foo" and "bar" branches of your "mykey" document. A 
consistent, but arbitrary, tie-breaker algorithm selected "foo" to display when 
fetching the document (though you can retrieve the other, or all, versions if 
you wish). When you deleted "foo" on B, you promoted the other branch.

CouchDB replication does not overwrite data; it will retain all your concurrent 
edits.
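
Both branches remain retrievable on B. A minimal sketch using the example 
document from the report (revision identifiers elided):

{code}
$ curl 'http://localhost:5984/B/mykey?conflicts=true'
{"_id":"mykey","_rev":"1-...","content":"foo","_conflicts":["1-..."]}

# returns every leaf revision, including the "bar" branch
$ curl 'http://localhost:5984/B/mykey?open_revs=all'
{code}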

 During replication, documents with the same key are not properly replaced
 -

 Key: COUCHDB-2536
 URL: https://issues.apache.org/jira/browse/COUCHDB-2536
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Database Core
Affects Versions: 1.6.1
 Environment: Windows 7 (64bit)
Reporter: Dennis

 Hi,
 We have two databases, A and B. Database A contains documents having the same 
 ids as documents in database B. Our goal was to replace the documents of 
 database B with the documents of database A, iff the documents have the same 
 id. Therefore, we replicated the content of database A to database B. At 
 first it seemed to work perfectly, but then we discovered the following issue:
 if we delete a document in database B which was replaced by a document of 
 database A, then the document which was replaced reappears.
 Minimal setup to reproduce this behaviour:
 Database A contains a document {"_id": "mykey", "content": "foo"}
 Database B contains a document {"_id": "mykey", "content": "bar"}
 Replicate database A to database B (using the CouchDB replicator).
 Database B now contains a document {"_id": "mykey", "content": "foo"} as 
 expected. This document has no previous versions.
 If the document with the key "mykey" is deleted in database B, the document 
 {"_id": "mykey", "content": "bar"} reappears in the database.
 Why does the replaced document reappear? Is this the intended behaviour of 
 CouchDB or a bug?
 We expected to get a conflict during replication, or that maybe the existing 
 document in database B would be set as the previous version of the document 
 by which it was replaced. But the current behaviour was unexpected.
 We are using CouchDB 1.6.1 on Windows 7 (64bit).
 Best regards
 Dennis



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2536) During replication, documents with the same key are not properly replaced

2015-01-08 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269124#comment-14269124
 ] 

Robert Newson commented on COUCHDB-2536:


Noting here that the replicator will not fail with a 409 in this case (which 
sounds like what you were expecting); it introduces new branches instead. This 
is by design.

 During replication, documents with the same key are not properly replaced
 -

 Key: COUCHDB-2536
 URL: https://issues.apache.org/jira/browse/COUCHDB-2536
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Database Core
Affects Versions: 1.6.1
 Environment: Windows 7 (64bit)
Reporter: Dennis

 Hi,
 We have two databases, A and B. Database A contains documents having the same 
 ids as documents in database B. Our goal was to replace the documents of 
 database B with the documents of database A, iff the documents have the same 
 id. Therefore, we replicated the content of database A to database B. At 
 first it seemed to work perfectly, but then we discovered the following issue:
 if we delete a document in database B which was replaced by a document of 
 database A, then the document which was replaced reappears.
 Minimal setup to reproduce this behaviour:
 Database A contains a document {"_id": "mykey", "content": "foo"}
 Database B contains a document {"_id": "mykey", "content": "bar"}
 Replicate database A to database B (using the CouchDB replicator).
 Database B now contains a document {"_id": "mykey", "content": "foo"} as 
 expected. This document has no previous versions.
 If the document with the key "mykey" is deleted in database B, the document 
 {"_id": "mykey", "content": "bar"} reappears in the database.
 Why does the replaced document reappear? Is this the intended behaviour of 
 CouchDB or a bug?
 We expected to get a conflict during replication, or that maybe the existing 
 document in database B would be set as the previous version of the document 
 by which it was replaced. But the current behaviour was unexpected.
 We are using CouchDB 1.6.1 on Windows 7 (64bit).
 Best regards
 Dennis



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2535) Crash when replicating doc that exceeds max_document_size

2015-01-06 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14266637#comment-14266637
 ] 

Robert Newson commented on COUCHDB-2535:


Noting that the expected result is not what we'd do; we won't pretend this is a 
validate_doc_update failure when it isn't. It's more like failing to write a 
design document on the target when not using admin creds.

That said, I'm not keen on a replication 'succeeding' (i.e., completing) without 
copying all the (non-design) documents. I think it *should* crash/fail until 
the target is able to take all the documents or the user applies a filter to 
avoid documents that can't be replicated, but I'd like to hear from other devs.
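
For completeness, this is the kind of request meant by "applies a filter"; the 
filter design document (repl/small_docs here) is hypothetical and would have to 
exist on the source:

{code}
$ curl -X POST http://localhost:5984/_replicate \
       -H 'Content-Type: application/json' \
       -d '{"source":"http://source:5984/db","target":"http://target:5984/db","filter":"repl/small_docs"}'
{code}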

 Crash when replicating doc that exceeds max_document_size
 -

 Key: COUCHDB-2535
 URL: https://issues.apache.org/jira/browse/COUCHDB-2535
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Replication
Affects Versions: 1.5.1
Reporter: Rami Alia

 DB SOURCE has a max_document_size of 400MB, DB TARGET has a max_document_size 
 of 40MB. Attempt to replicate a doc greater than 40MB from SOURCE to TARGET.
 Observed result:
 SOURCE replicator crashes followed by SOURCE couchdb crashing
 Expected result:
 SOURCE/TARGET handle this as gracefully as a validation fail and not crash 
 replication or couchdb



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2535) Crash when replicating doc that exceeds max_document_size

2015-01-06 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14266682#comment-14266682
 ] 

Robert Newson commented on COUCHDB-2535:


Obviously we should do better than crash.

The question for me is what we do for partial replications. This has bothered 
me for a while as it occurs in other circumstances (design docs silently not 
copied if not admin, things that fail the filter or vdu, etc). I suppose as 
long as we bump the docs_failed count (or whatever it's called) then it's no 
worse than we have done in the past, but who sees that count? Can you see it 
for feed=continuous at all? (It's in the HTTP response to _replicate calls, in 
case no one has any idea what I'm referring to.)

 Crash when replicating doc that exceeds max_document_size
 -

 Key: COUCHDB-2535
 URL: https://issues.apache.org/jira/browse/COUCHDB-2535
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Replication
Affects Versions: 1.5.1
Reporter: Rami Alia

 DB SOURCE has a max_document_size of 400MB, DB TARGET has a max_document_size 
 of 40MB. Attempt to replicate a doc greater than 40MB from SOURCE to TARGET.
 Observed result:
 SOURCE replicator crashes followed by SOURCE couchdb crashing
 Expected result:
 SOURCE/TARGET handle this as gracefully as a validation fail and not crash 
 replication or couchdb



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (COUCHDB-1637) The replicator should not worry about case sensitive headers

2014-12-22 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-1637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-1637.

   Resolution: Fixed
Fix Version/s: 2.0.0

 The replicator should not worry about case sensitive headers
 

 Key: COUCHDB-1637
 URL: https://issues.apache.org/jira/browse/COUCHDB-1637
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.2
Reporter: Christian Tellnes
 Fix For: 2.0.0

 Attachments: couchdb.out


 This is a problem if you are using a proxy which lowercases all headers.
 Steps to reproduce using node-http-proxy:
 npm install -g http-proxy
 node-http-proxy --port 1337 --target localhost:5984 
 curl -H 'Content-Type: application/json' 
 'http://localhost:5984/_replicator' -d 
 '{"source":"https://localhost:1337/test_source/","target":"test_target"}'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2310) Add a bulk API for revs & open_revs

2014-12-18 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14252179#comment-14252179
 ] 

Robert Newson commented on COUCHDB-2310:


I can't say that I agree, no. _bulk_get is not a good name; it doesn't tell you 
what you're getting. We earlier proposed /_bulk_revs, which at least hints at 
what you're getting back (aka a whole bunch of document revisions).

It's a shame we couldn't see a way to extend the existing bulk get API (POST 
/_all_docs); having two seems awkward in comparison. I appreciate that we 
raised and discussed some compatibility issues earlier.
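
For readers following along, the shape under discussion (whatever the endpoint 
ends up being called) is a single POST carrying the id/rev pairs the replicator 
already knows about, returning each document with its full _revisions object in 
one round trip. A sketch, with the second id and its rev purely illustrative:

{code}
$ curl -X POST 'http://localhost:5984/db/_bulk_get?revs=true' \
       -H 'Content-Type: application/json' \
       -d '{"docs":[{"id":"foo","rev":"10-c78e199ad5e996b240c9d6482907088e"},
                     {"id":"bar","rev":"3-..."}]}'
{code}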



 Add a bulk API for revs & open_revs
 ---

 Key: COUCHDB-2310
 URL: https://issues.apache.org/jira/browse/COUCHDB-2310
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Nolan Lawson

 CouchDB replication is too slow.
 And what makes it so slow is that it's just so unnecessarily chatty. During 
 replication, you have to do a separate GET for each individual document, in 
 order to get the full {{_revisions}} object for that document (using the 
 {{revs}} and {{open_revs}} parameters – refer to [the TouchDB 
 writeup|https://github.com/couchbaselabs/TouchDB-iOS/wiki/Replication-Algorithm]
  or [Benoit's writeup|http://dataprotocols.org/couchdb-replication/] if you 
 need a refresher).
 So for example, let's say you've got a database full of 10,000 documents, and 
 you replicate using a batch size of 500 (batch sizes are configurable in 
 PouchDB). The conversation for a single batch basically looks like this:
 {code}
 - REPLICATOR: gimme 500 changes since seq X (1 GET request)
   - SOURCE: okay
 - REPLICATOR: gimme the _revs_diff for these 500 docs/_revs (1 POST request)
   - SOURCE: okay
 - repeat 500 times:
   - REPLICATOR: gimme the _revisions for doc n with _revs [...] (1 GET 
 request)
 - SOURCE: okay
 - REPLICATOR: here's a _bulk_docs with 500 documents (1 POST request)
 - TARGET: okay
 {code}
 See the problem here? That 500-loop, where we have to do a GET for each one 
 of 500 documents, is a lot of unnecessary back-and-forth, considering that 
 the replicator already knows what it needs before the loop starts. You can 
 parallelize, but if you assume a browser (e.g. for PouchDB), most browsers 
 only let you do ~8 simultaneous requests at once. Plus, there's latency and 
 HTTP headers to consider. So overall, it's not cool.
 So why do we even need to do the separate requests? Shouldn't {{_all_docs}} 
 be good enough? Turns out it's not, because we need this special 
 {{_revisions}} object.
 For example, consider a document {{'foo'}} with 10 revisions. You may compact 
 the database, in which case revisions {{1-x}} through {{9-x}} are no longer 
 retrievable. However, if you query using {{revs}} and {{open_revs}}, those 
 rev IDs are still available:
 {code}
 $ curl 'http://nolan.iriscouch.com/test/foo?revs=true&open_revs=all'
 {
   "_id": "foo",
   "_rev": "10-c78e199ad5e996b240c9d6482907088e",
   "_revisions": {
     "start": 10,
     "ids": [
       "c78e199ad5e996b240c9d6482907088e",
       "f560283f1968a05046f0c38e468006bb",
       "0091198554171c632c27c8342ddec5af",
       "e0a023e2ea59db73f812ad773ea08b17",
       "65d7f8b8206a244035edd9f252f206ad",
       "069d1432a003c58bdd23f01ff80b718f",
       "d21f26bb604b7fe9eba03ce4562cf37b",
       "31d380f99a6e54875855e1c24469622d",
       "3b4791360024426eadafe31542a2c34b",
       "967a00dff5e02add41819138abb3284d"
     ]
   }
 }
 {code}
 And in the replication algorithm, _this full \_revisions object is required_ 
 at the point when you copy the document from one database to another, which 
 is accomplished with a POST to {{_bulk_docs}} using {{new_edits=false}}. If 
 you don't have the full {{_revisions}} object, CouchDB accepts the new 
 revision, but considers it to be a conflict. (The exception is with 
 generation-1 documents, since they have no history, so as it says in the 
 TouchDB writeup, you can safely just use {{_all_docs}} as an optimization for 
 such documents.)
 And unfortunately, this {{_revision}} object is only available from the {{GET 
 /:dbid/:docid}} endpoint. Trust me; I've tried the other APIs. You can't get 
 it anywhere else.
 This is a huge problem, especially in PouchDB where we often have to deal 
 with CORS, meaning the number of HTTP requests is doubled. So for those 500 
 GETs, it's an extra 500 OPTIONs, which is just unacceptable.
 Replication does not have to be slow. While we were experimenting with ways 
 of fetching documents in bulk, we tried a technique that just relied on using 
 {{_changes}} with {{include_docs=true}} 
 ([#2472|https://github.com/pouchdb/pouchdb/pull/2472]). This pushed 
 conflicts into the target database, but on the upside, you can sync ~95k 
 documents 

[jira] [Commented] (COUCHDB-2310) Add a bulk API for revs & open_revs

2014-12-18 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14252181#comment-14252181
 ] 

Robert Newson commented on COUCHDB-2310:


Finally, the intent to make everything accessible in bulk using POSTs seems to 
ruin our RESTful nature. Is there another way to pursue performance 
enhancements without going that far? I personally hate all the bulk endpoints 
(each added pretty much ad hoc for much the same reason motivating this ticket).

 Add a bulk API for revs & open_revs
 ---

 Key: COUCHDB-2310
 URL: https://issues.apache.org/jira/browse/COUCHDB-2310
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Nolan Lawson

 CouchDB replication is too slow.
 And what makes it so slow is that it's just so unnecessarily chatty. During 
 replication, you have to do a separate GET for each individual document, in 
 order to get the full {{_revisions}} object for that document (using the 
 {{revs}} and {{open_revs}} parameters – refer to [the TouchDB 
 writeup|https://github.com/couchbaselabs/TouchDB-iOS/wiki/Replication-Algorithm]
  or [Benoit's writeup|http://dataprotocols.org/couchdb-replication/] if you 
 need a refresher).
 So for example, let's say you've got a database full of 10,000 documents, and 
 you replicate using a batch size of 500 (batch sizes are configurable in 
 PouchDB). The conversation for a single batch basically looks like this:
 {code}
 - REPLICATOR: gimme 500 changes since seq X (1 GET request)
   - SOURCE: okay
 - REPLICATOR: gimme the _revs_diff for these 500 docs/_revs (1 POST request)
   - SOURCE: okay
 - repeat 500 times:
   - REPLICATOR: gimme the _revisions for doc n with _revs [...] (1 GET 
 request)
 - SOURCE: okay
 - REPLICATOR: here's a _bulk_docs with 500 documents (1 POST request)
 - TARGET: okay
 {code}
 See the problem here? That 500-loop, where we have to do a GET for each one 
 of 500 documents, is a lot of unnecessary back-and-forth, considering that 
 the replicator already knows what it needs before the loop starts. You can 
 parallelize, but if you assume a browser (e.g. for PouchDB), most browsers 
 only let you do ~8 simultaneous requests at once. Plus, there's latency and 
 HTTP headers to consider. So overall, it's not cool.
 So why do we even need to do the separate requests? Shouldn't {{_all_docs}} 
 be good enough? Turns out it's not, because we need this special 
 {{_revisions}} object.
 For example, consider a document {{'foo'}} with 10 revisions. You may compact 
 the database, in which case revisions {{1-x}} through {{9-x}} are no longer 
 retrievable. However, if you query using {{revs}} and {{open_revs}}, those 
 rev IDs are still available:
 {code}
 $ curl 'http://nolan.iriscouch.com/test/foo?revs=true&open_revs=all'
 {
   "_id": "foo",
   "_rev": "10-c78e199ad5e996b240c9d6482907088e",
   "_revisions": {
     "start": 10,
     "ids": [
       "c78e199ad5e996b240c9d6482907088e",
       "f560283f1968a05046f0c38e468006bb",
       "0091198554171c632c27c8342ddec5af",
       "e0a023e2ea59db73f812ad773ea08b17",
       "65d7f8b8206a244035edd9f252f206ad",
       "069d1432a003c58bdd23f01ff80b718f",
       "d21f26bb604b7fe9eba03ce4562cf37b",
       "31d380f99a6e54875855e1c24469622d",
       "3b4791360024426eadafe31542a2c34b",
       "967a00dff5e02add41819138abb3284d"
     ]
   }
 }
 {code}
 And in the replication algorithm, _this full \_revisions object is required_ 
 at the point when you copy the document from one database to another, which 
 is accomplished with a POST to {{_bulk_docs}} using {{new_edits=false}}. If 
 you don't have the full {{_revisions}} object, CouchDB accepts the new 
 revision, but considers it to be a conflict. (The exception is with 
 generation-1 documents, since they have no history, so as it says in the 
 TouchDB writeup, you can safely just use {{_all_docs}} as an optimization for 
 such documents.)
 And unfortunately, this {{_revision}} object is only available from the {{GET 
 /:dbid/:docid}} endpoint. Trust me; I've tried the other APIs. You can't get 
 it anywhere else.
 This is a huge problem, especially in PouchDB where we often have to deal 
 with CORS, meaning the number of HTTP requests is doubled. So for those 500 
 GETs, it's an extra 500 OPTIONs, which is just unacceptable.
 Replication does not have to be slow. While we were experimenting with ways 
 of fetching documents in bulk, we tried a technique that just relied on using 
 {{_changes}} with {{include_docs=true}} 
 ([#2472|https://github.com/pouchdb/pouchdb/pull/2472]). This pushed 
 conflicts into the target database, but on the upside, you can sync ~95k 
 documents from npm's skimdb repository to the browser in less than 20 
 minutes! (See [npm-browser.com|http://npm-browser.com] for a 

[jira] [Commented] (COUCHDB-2310) Add a bulk API for revs & open_revs

2014-12-18 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14252514#comment-14252514
 ] 

Robert Newson commented on COUCHDB-2310:


As an addendum, we could support _bulk_get as a (deprecated) alias for 
_bulk_revs and remove it in the version after. And I agree that the API of 
_bulk_get looks good to me, though I note that the rendering is somewhat broken.


 Add a bulk API for revs & open_revs
 ---

 Key: COUCHDB-2310
 URL: https://issues.apache.org/jira/browse/COUCHDB-2310
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Nolan Lawson

 CouchDB replication is too slow.
 And what makes it so slow is that it's just so unnecessarily chatty. During 
 replication, you have to do a separate GET for each individual document, in 
 order to get the full {{_revisions}} object for that document (using the 
 {{revs}} and {{open_revs}} parameters – refer to [the TouchDB 
 writeup|https://github.com/couchbaselabs/TouchDB-iOS/wiki/Replication-Algorithm]
  or [Benoit's writeup|http://dataprotocols.org/couchdb-replication/] if you 
 need a refresher).
 So for example, let's say you've got a database full of 10,000 documents, and 
 you replicate using a batch size of 500 (batch sizes are configurable in 
 PouchDB). The conversation for a single batch basically looks like this:
 {code}
 - REPLICATOR: gimme 500 changes since seq X (1 GET request)
   - SOURCE: okay
 - REPLICATOR: gimme the _revs_diff for these 500 docs/_revs (1 POST request)
   - SOURCE: okay
 - repeat 500 times:
   - REPLICATOR: gimme the _revisions for doc n with _revs [...] (1 GET 
 request)
 - SOURCE: okay
 - REPLICATOR: here's a _bulk_docs with 500 documents (1 POST request)
 - TARGET: okay
 {code}
 See the problem here? That 500-loop, where we have to do a GET for each one 
 of 500 documents, is a lot of unnecessary back-and-forth, considering that 
 the replicator already knows what it needs before the loop starts. You can 
 parallelize, but if you assume a browser (e.g. for PouchDB), most browsers 
 only let you do ~8 simultaneous requests at once. Plus, there's latency and 
 HTTP headers to consider. So overall, it's not cool.
 So why do we even need to do the separate requests? Shouldn't {{_all_docs}} 
 be good enough? Turns out it's not, because we need this special 
 {{_revisions}} object.
 For example, consider a document {{'foo'}} with 10 revisions. You may compact 
 the database, in which case revisions {{1-x}} through {{9-x}} are no longer 
 retrievable. However, if you query using {{revs}} and {{open_revs}}, those 
 rev IDs are still available:
 {code}
 $ curl 'http://nolan.iriscouch.com/test/foo?revs=true&open_revs=all'
 {
   "_id": "foo",
   "_rev": "10-c78e199ad5e996b240c9d6482907088e",
   "_revisions": {
     "start": 10,
     "ids": [
       "c78e199ad5e996b240c9d6482907088e",
       "f560283f1968a05046f0c38e468006bb",
       "0091198554171c632c27c8342ddec5af",
       "e0a023e2ea59db73f812ad773ea08b17",
       "65d7f8b8206a244035edd9f252f206ad",
       "069d1432a003c58bdd23f01ff80b718f",
       "d21f26bb604b7fe9eba03ce4562cf37b",
       "31d380f99a6e54875855e1c24469622d",
       "3b4791360024426eadafe31542a2c34b",
       "967a00dff5e02add41819138abb3284d"
     ]
   }
 }
 {code}
 And in the replication algorithm, _this full \_revisions object is required_ 
 at the point when you copy the document from one database to another, which 
 is accomplished with a POST to {{_bulk_docs}} using {{new_edits=false}}. If 
 you don't have the full {{_revisions}} object, CouchDB accepts the new 
 revision, but considers it to be a conflict. (The exception is with 
 generation-1 documents, since they have no history, so as it says in the 
 TouchDB writeup, you can safely just use {{_all_docs}} as an optimization for 
 such documents.)
 And unfortunately, this {{_revision}} object is only available from the {{GET 
 /:dbid/:docid}} endpoint. Trust me; I've tried the other APIs. You can't get 
 it anywhere else.
 This is a huge problem, especially in PouchDB where we often have to deal 
 with CORS, meaning the number of HTTP requests is doubled. So for those 500 
 GETs, it's an extra 500 OPTIONs, which is just unacceptable.
 Replication does not have to be slow. While we were experimenting with ways 
 of fetching documents in bulk, we tried a technique that just relied on using 
 {{_changes}} with {{include_docs=true}} 
 ([#2472|https://github.com/pouchdb/pouchdb/pull/2472]). This pushed 
 conflicts into the target database, but on the upside, you can sync ~95k 
 documents from npm's skimdb repository to the browser in less than 20 
 minutes! (See [npm-browser.com|http://npm-browser.com] for a demo.)
 What an amazing story we could tell about the beauty of CouchDB replication, 
 if 

[jira] [Commented] (COUCHDB-2310) Add a bulk API for revs & open_revs

2014-12-18 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14252527#comment-14252527
 ] 

Robert Newson commented on COUCHDB-2310:


And noting that we don't have a good way to indicate deprecated features except 
in documentation (which people read only if they encounter a problem). This is 
another reason why I'm down on 1.7 (which, last I heard, was going to somehow 
help people transition to 2.0 by deprecating features, but no good mechanism 
was devised for that to my knowledge).

 Add a bulk API for revs & open_revs
 ---

 Key: COUCHDB-2310
 URL: https://issues.apache.org/jira/browse/COUCHDB-2310
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Nolan Lawson

 CouchDB replication is too slow.
 And what makes it so slow is that it's just so unnecessarily chatty. During 
 replication, you have to do a separate GET for each individual document, in 
 order to get the full {{_revisions}} object for that document (using the 
 {{revs}} and {{open_revs}} parameters – refer to [the TouchDB 
 writeup|https://github.com/couchbaselabs/TouchDB-iOS/wiki/Replication-Algorithm]
  or [Benoit's writeup|http://dataprotocols.org/couchdb-replication/] if you 
 need a refresher).
 So for example, let's say you've got a database full of 10,000 documents, and 
 you replicate using a batch size of 500 (batch sizes are configurable in 
 PouchDB). The conversation for a single batch basically looks like this:
 {code}
 - REPLICATOR: gimme 500 changes since seq X (1 GET request)
   - SOURCE: okay
 - REPLICATOR: gimme the _revs_diff for these 500 docs/_revs (1 POST request)
   - SOURCE: okay
 - repeat 500 times:
   - REPLICATOR: gimme the _revisions for doc n with _revs [...] (1 GET 
 request)
 - SOURCE: okay
 - REPLICATOR: here's a _bulk_docs with 500 documents (1 POST request)
 - TARGET: okay
 {code}
 See the problem here? That 500-loop, where we have to do a GET for each one 
 of 500 documents, is a lot of unnecessary back-and-forth, considering that 
 the replicator already knows what it needs before the loop starts. You can 
 parallelize, but if you assume a browser (e.g. for PouchDB), most browsers 
 only let you do ~8 simultaneous requests at once. Plus, there's latency and 
 HTTP headers to consider. So overall, it's not cool.
 So why do we even need to do the separate requests? Shouldn't {{_all_docs}} 
 be good enough? Turns out it's not, because we need this special 
 {{_revisions}} object.
 For example, consider a document {{'foo'}} with 10 revisions. You may compact 
 the database, in which case revisions {{1-x}} through {{9-x}} are no longer 
 retrievable. However, if you query using {{revs}} and {{open_revs}}, those 
 rev IDs are still available:
 {code}
 $ curl 'http://nolan.iriscouch.com/test/foo?revs=true&open_revs=all'
 {
   "_id": "foo",
   "_rev": "10-c78e199ad5e996b240c9d6482907088e",
   "_revisions": {
     "start": 10,
     "ids": [
       "c78e199ad5e996b240c9d6482907088e",
       "f560283f1968a05046f0c38e468006bb",
       "0091198554171c632c27c8342ddec5af",
       "e0a023e2ea59db73f812ad773ea08b17",
       "65d7f8b8206a244035edd9f252f206ad",
       "069d1432a003c58bdd23f01ff80b718f",
       "d21f26bb604b7fe9eba03ce4562cf37b",
       "31d380f99a6e54875855e1c24469622d",
       "3b4791360024426eadafe31542a2c34b",
       "967a00dff5e02add41819138abb3284d"
     ]
   }
 }
 {code}
 And in the replication algorithm, _this full \_revisions object is required_ 
 at the point when you copy the document from one database to another, which 
 is accomplished with a POST to {{_bulk_docs}} using {{new_edits=false}}. If 
 you don't have the full {{_revisions}} object, CouchDB accepts the new 
 revision, but considers it to be a conflict. (The exception is with 
 generation-1 documents, since they have no history, so as it says in the 
 TouchDB writeup, you can safely just use {{_all_docs}} as an optimization for 
 such documents.)
 And unfortunately, this {{_revision}} object is only available from the {{GET 
 /:dbid/:docid}} endpoint. Trust me; I've tried the other APIs. You can't get 
 it anywhere else.
 This is a huge problem, especially in PouchDB where we often have to deal 
 with CORS, meaning the number of HTTP requests is doubled. So for those 500 
 GETs, it's an extra 500 OPTIONs, which is just unacceptable.
 Replication does not have to be slow. While we were experimenting with ways 
 of fetching documents in bulk, we tried a technique that just relied on using 
 {{_changes}} with {{include_docs=true}} 
 ([#2472|https://github.com/pouchdb/pouchdb/pull/2472]). This pushed 
 conflicts into the target database, but on the upside, you can sync ~95k 
 documents from npm's skimdb repository to the browser in less than 20 
 minutes! (See 

[jira] [Commented] (COUCHDB-2497) Deprecate /_replicate endpoint

2014-12-08 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14237949#comment-14237949
 ] 

Robert Newson commented on COUCHDB-2497:


-1. _replicate is the only way to trigger a replication that doesn't cause 
writes to a database, and that's important, especially as malformed replicator 
docs that cause crashing replications can pound the _replicator database hard, 
introducing conflicts too.

_replicator being a database is the mistake, a serious operational one. 
_replicate should have added persistent:true as a flag, and all the 
persistence part should have been hidden.
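
For contrast, the two ways of starting the same replication today; the first 
creates no _replicator document, the second does (the URLs and document id are 
illustrative):

{code}
# transient: POST to _replicate, nothing stored in _replicator
$ curl -X POST http://localhost:5984/_replicate \
       -H 'Content-Type: application/json' \
       -d '{"source":"http://localhost:5984/a","target":"http://localhost:5984/b","continuous":true}'

# persistent: the replication is a document in the _replicator database
$ curl -X PUT http://localhost:5984/_replicator/a-to-b \
       -H 'Content-Type: application/json' \
       -d '{"source":"http://localhost:5984/a","target":"http://localhost:5984/b","continuous":true}'
{code}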



 Deprecate /_replicate endpoint
 --

 Key: COUCHDB-2497
 URL: https://issues.apache.org/jira/browse/COUCHDB-2497
 Project: CouchDB
  Issue Type: Improvement
  Security Level: public(Regular issues) 
  Components: HTTP Interface, Replication
Reporter: Alexander Shorin

 We have two similar APIs to run replications. How about reducing them to a 
 single one? We cannot just return HTTP 301 from /_replicate to /_replicator 
 for POST requests, since in that case the user must confirm the request 
 submission, whatever that means. But we could just reroute requests 
 internally, or try to use the experimental [HTTP 
 308|http://tools.ietf.org/html/rfc7238] to deal with it.
 The motivation is to simplify replication task management, since it is not as 
 simple to cancel an active temporary replication as it is to cancel a 
 persistent one.
 Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2390) Fauxton config, admin sections considered dangerous in 2.0

2014-12-01 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14231109#comment-14231109
 ] 

Robert Newson commented on COUCHDB-2390:


Let's consider CoreOS's etcd for this. With config stored in etcd we can remove 
the .ini files completely and have a true cluster config, not a sum-of-nodes config.

 Fauxton config, admin sections considered dangerous in 2.0
 --

 Key: COUCHDB-2390
 URL: https://issues.apache.org/jira/browse/COUCHDB-2390
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: BigCouch, Fauxton
Reporter: Joan Touzet
Assignee: Ben Keen
Priority: Blocker

 In Fauxton today, there are 2 sections to edit config-file settings and to 
 create new admins. Neither of these sections will work as intended in a 
 clustered setup.
 Any Fauxton session will necessarily be speaking to a single machine. The 
 config APIs and admin user info as exposed will only add that information to 
 a single node's .ini file.
 We should hide these features in Fauxton for now (short-term fix) and correct 
 the config/admin creation APIs to work correctly in a clustered setup 
 (medium-term fix).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (COUCHDB-2461) _info on a view results in badmatch error

2014-11-22 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-2461.

Resolution: Fixed

 _info on a view results in badmatch error
 -

 Key: COUCHDB-2461
 URL: https://issues.apache.org/jira/browse/COUCHDB-2461
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: BigCouch
Reporter: Robert Kowalski
Priority: Blocker
 Fix For: 2.0.0

 Attachments: Bildschirmfoto 2014-11-13 um 17.48.34.png


 The request http://localhost:5984/foo/_design/lala/_info will lead to a 
 timeout.
 The response is {"error":"badmatch","reason":"{error,timeout}","ref":400627190}.
 There is a comment from Robert containing a chatlog from couchdb-dev where 
 he found the bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-477) Add database uuid's

2014-11-16 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214022#comment-14214022
 ] 

Robert Newson commented on COUCHDB-477:
---

Yup. 

Sent from my iPhone



 Add database uuid's
 ---

 Key: COUCHDB-477
 URL: https://issues.apache.org/jira/browse/COUCHDB-477
 Project: CouchDB
  Issue Type: New Feature
Reporter: Robert Newson
 Attachments: 
 0001-add-uuid-to-database-on-creation-return-it-in-db_in.patch, 
 db_uuids.patch, db_uuids.patch, db_uuids_v2.patch


 Add a uuid to db_header to distinguish different databases that have the same 
 name (for example, after deleting and recreating a database of the same name).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-1218) Better logger performance

2014-11-16 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214056#comment-14214056
 ] 

Robert Newson commented on COUCHDB-1218:


I'd close it as fixed / obsoleted by the introduction of lager, yes.

 Better logger performance
 -

 Key: COUCHDB-1218
 URL: https://issues.apache.org/jira/browse/COUCHDB-1218
 Project: CouchDB
  Issue Type: Improvement
Reporter: Filipe Manana
Assignee: Filipe Manana
 Attachments: 0001-Better-logger-performance.patch


 I made some experiments with OTP's disk_log module (available since at least 
 2001) to manage the log file.
 It turns out I got better throughput by using it. Basically it adopts a 
 strategy similar to the asynchronous couch_file Damien described in this 
 thread:
 http://mail-archives.apache.org/mod_mbox/couchdb-dev/201106.mbox/%3c5c39fb5a-0aca-4ff9-bd90-2ebecf271...@apache.org%3E
 Here's a benchmark with relaximation, 50 writers, 100 readers, documents of 
 1 KB, delayed_commits set to false and 'info' log level (default):
 http://graphs.mikeal.couchone.com/#/graph/9e19f6d9eeb318c70cabcf67bc013c7f
 The reads got better throughput (bottom graph, easier to visualize).
 The patch (also attached here), which has a descriptive comment, is at:
 https://github.com/fdmanana/couchdb/compare/logger_perf.patch



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2434) No context passed when turning off delete database event listener

2014-11-04 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14196784#comment-14196784
 ] 

Robert Newson commented on COUCHDB-2434:


We're not quite sure what you're referring to here. Could you provide some 
sample output to clarify?

 No context passed when turning off delete database event listener
 -

 Key: COUCHDB-2434
 URL: https://issues.apache.org/jira/browse/COUCHDB-2434
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
Reporter: Michelle Phung

 No context is passed when turning off the delete-database event listener. When 
 multiple views are created, all triggers for database deletion are turned off 
 from that point on, even if you need them to be on afterwards. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (COUCHDB-2066) Don't allow stupid storage of passwords

2014-10-26 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson closed COUCHDB-2066.
--
Resolution: Won't Fix

Instead, passwords will be upgraded to the new scheme on next auth thanks to 
COUCHDB-1780. Administrators can disable sha1 entirely when COUCHDB-2068 lands.

 Don't allow stupid storage of passwords
 ---

 Key: COUCHDB-2066
 URL: https://issues.apache.org/jira/browse/COUCHDB-2066
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
Reporter: Isaac Z. Schlueter

 If a password_sha/salt combination is PUT into the _users db, wrap that up in 
 PBKDF2.
 Discussion:
 https://twitter.com/janl/status/434818855626502144
 https://twitter.com/izs/status/434835388213899264
 https://twitter.com/janl/status/434835614790586368



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2407) Database updates feed is broken

2014-10-25 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184301#comment-14184301
 ] 

Robert Newson commented on COUCHDB-2407:


It's not broken (at least, not on this evidence). You need to create the 
database manually. We can't automate it as we don't know when the cluster is 
fully joined.

 Database updates feed is broken
 ---

 Key: COUCHDB-2407
 URL: https://issues.apache.org/jira/browse/COUCHDB-2407
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Affects Versions: 2.0.0
Reporter: Alexander Shorin

 As of the current state of CouchDB 2.0 (not sure which commit to reference; 
 just as of today), it acts very inconsistently:
 {code}
 http --json http://localhost:15984/_db_updates
 HTTP/1.1 404 Object Not Found
 Cache-Control: must-revalidate
 Content-Length: 58
 Content-Type: application/json
 Date: Sat, 25 Oct 2014 13:42:25 GMT
 Server: CouchDB/40c5c85 (Erlang OTP/17)
 X-Couch-Request-ID: 27e8ab2a
 X-CouchDB-Body-Time: 0
 {
 error: not_found, 
 reason: Database does not exist.
 }
 {code}
 Ok, there is no such database. But wait:
 {code}
 http --json 'http://localhost:15984/_db_updates?feed=eventsource'
 HTTP/1.1 400 Bad Request
 Cache-Control: must-revalidate
 Content-Length: 88
 Content-Type: application/json
 Date: Sat, 25 Oct 2014 13:39:59 GMT
 Server: CouchDB/40c5c85 (Erlang OTP/17)
 X-Couch-Request-ID: 3a5ca656
 X-CouchDB-Body-Time: 0
 {
 error: bad_request, 
 reason: Supported `feed` types: normal, continuous, longpoll
 }
 {code}
 The eventsource feed type is supported by CouchDB 1.x. Ok, let's try 
 suggested continuous one:
 {code}
 http --json 
 'http://localhost:15984/_db_updates?timeout=1000heartbeat=falsefeed=continuous'
 HTTP/1.1 400 Bad Request
 Cache-Control: must-revalidate
 Content-Length: 51
 Content-Type: application/json
 Date: Sat, 25 Oct 2014 13:50:59 GMT
 Server: CouchDB/40c5c85 (Erlang OTP/17)
 X-Couch-Request-ID: 6c560dc2
 X-CouchDB-Body-Time: 0
 {
 error: bad_request, 
 reason: invalid_integer
 }
 {code}
 The same request is correct for CouchDB 1.x. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2406) Unstable database sizes and update_seq values

2014-10-25 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184302#comment-14184302
 ] 

Robert Newson commented on COUCHDB-2406:


The values are derived as the sum of the values from one copy of each shard 
range. There are three copies of each range by default, and each copy holds 
slightly different values, hence the unstable response. Suggestions welcome, 
though note we don't really want to require an answer from every copy of every shard.
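
A toy model of the behaviour described above; the shard ranges and sizes are invented purely for illustration.

{code}
import random

# q=4 shard ranges, n=3 copies each; the copies disagree slightly
# (e.g. different compaction state on each node).
copies = {
    "00000000-3fffffff": [10000, 10050, 10010],
    "40000000-7fffffff": [9800, 9790, 9825],
    "80000000-bfffffff": [10200, 10260, 10190],
    "c0000000-ffffffff": [9950, 9900, 9975],
}

def db_info_size():
    # Whichever copy of each range answers first contributes its value.
    return sum(random.choice(sizes) for sizes in copies.values())

# Two consecutive "GET /db" calls can legitimately report different totals.
print(db_info_size(), db_info_size())
{code}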

 Unstable database sizes and update_seq values
 -

 Key: COUCHDB-2406
 URL: https://issues.apache.org/jira/browse/COUCHDB-2406
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Database Core
Reporter: Alexander Shorin

 Each time requesting database info CouchDB 2.0 return different information 
 about database sizes and update seq while there is no any activity for 
 monitored database.
 Script to reproduce:
 {code}
 import requests
 import time
 dburl = 'http://localhost:15984/test'
 init_dbinfo = requests.get(dburl).json()
 diff_found = False
 while True:
     dbinfo = requests.get(dburl).json()
     for key, value in sorted(dbinfo.items()):
         if dbinfo[key] != init_dbinfo[key]:
             diff_found = True
             print(key)
             print('was:', init_dbinfo[key])
             print('now:', dbinfo[key])
             print('-' * 20)
     if diff_found:
         break
     time.sleep(1)
 {code}
 Example output:
 {code}
 data_size
 was: 25807939
 now: 25808590
 
 disk_size
 was: 7128
 now: 71329232
 
 sizes
 was: {'external': 0, 'file': 7128, 'active': 25807939}
 now: {'external': 0, 'file': 71329232, 'active': 25808590}
 
 update_seq
 was: [59238, 
 'g1FbeJzLYWBg4MhgTmHgz8tPSTV2MDQy1zMAQsMcoARTIkOS_P___7OSGBhkeuCqDNFUJSkAySR7mMKLuBU6gBTGwxRuwa0wAaSwHqbwJ06FeSxAkqEBSAHVzgcplpUmoHgBRPF-sOLZBBQfgCi-D1bMjDOcIIofQBRD3Lw4CwAxnFvL']
 now: [59238, 
 'g1FbeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7OSGBhkenCqSlIAkkn2MIUXcSt0ACmMhyncglthAkhhPUzhT5wK81iAJEMDkAKqnQ9SLCtNQPECiOL9YMWzCSg-AFF8H6yYmYDiBxDFEDcvzgIALrhbxw']
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2407) Database updates feed is broken

2014-10-25 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184310#comment-14184310
 ] 

Robert Newson commented on COUCHDB-2407:


We can't automate it as we don't know when the cluster is fully joined.

 Database updates feed is broken
 ---

 Key: COUCHDB-2407
 URL: https://issues.apache.org/jira/browse/COUCHDB-2407
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Affects Versions: 2.0.0
Reporter: Alexander Shorin

 As of the current state of CouchDB 2.0 (not sure which commit to reference; 
 just as of today), it acts very inconsistently:
 {code}
 http --json http://localhost:15984/_db_updates
 HTTP/1.1 404 Object Not Found
 Cache-Control: must-revalidate
 Content-Length: 58
 Content-Type: application/json
 Date: Sat, 25 Oct 2014 13:42:25 GMT
 Server: CouchDB/40c5c85 (Erlang OTP/17)
 X-Couch-Request-ID: 27e8ab2a
 X-CouchDB-Body-Time: 0
 {
 error: not_found, 
 reason: Database does not exist.
 }
 {code}
 Ok, there is no such database. But wait:
 {code}
 http --json 'http://localhost:15984/_db_updates?feed=eventsource'
 HTTP/1.1 400 Bad Request
 Cache-Control: must-revalidate
 Content-Length: 88
 Content-Type: application/json
 Date: Sat, 25 Oct 2014 13:39:59 GMT
 Server: CouchDB/40c5c85 (Erlang OTP/17)
 X-Couch-Request-ID: 3a5ca656
 X-CouchDB-Body-Time: 0
 {
 error: bad_request, 
 reason: Supported `feed` types: normal, continuous, longpoll
 }
 {code}
 The eventsource feed type is supported by CouchDB 1.x. Ok, let's try 
 suggested continuous one:
 {code}
 http --json 
 'http://localhost:15984/_db_updates?timeout=1000heartbeat=falsefeed=continuous'
 HTTP/1.1 400 Bad Request
 Cache-Control: must-revalidate
 Content-Length: 51
 Content-Type: application/json
 Date: Sat, 25 Oct 2014 13:50:59 GMT
 Server: CouchDB/40c5c85 (Erlang OTP/17)
 X-Couch-Request-ID: 6c560dc2
 X-CouchDB-Body-Time: 0
 {
 error: bad_request, 
 reason: invalid_integer
 }
 {code}
 The same request is correct for CouchDB 1.x. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2407) Database updates feed is broken

2014-10-25 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184314#comment-14184314
 ] 

Robert Newson commented on COUCHDB-2407:


_db_updates in 2.0 is improved: you can query it for historical values, and it's 
no longer an ephemeral event stream.

Agreed that we need to fix up the error messages; we never normally expose a 
cluster before this db is created.
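
A sketch of the workflow this implies, assuming the backing database is the one later 2.0 builds call _global_changes; that name may not match the snapshot being tested in this ticket.

{code}
import requests

NODE = "http://localhost:15984"

# The feed is backed by a regular clustered database that currently has to be
# created by hand once the cluster is joined.
requests.put(NODE + "/_global_changes")  # assumed name of the backing db

# After that, /_db_updates answers normally and can be re-read later, i.e. it
# is no longer a purely ephemeral event stream.
print(requests.get(NODE + "/_db_updates").json())
{code}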

 Database updates feed is broken
 ---

 Key: COUCHDB-2407
 URL: https://issues.apache.org/jira/browse/COUCHDB-2407
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Affects Versions: 2.0.0
Reporter: Alexander Shorin

 As of the current state of CouchDB 2.0 (not sure which commit to reference; 
 just as of today), it acts very inconsistently:
 {code}
 http --json http://localhost:15984/_db_updates
 HTTP/1.1 404 Object Not Found
 Cache-Control: must-revalidate
 Content-Length: 58
 Content-Type: application/json
 Date: Sat, 25 Oct 2014 13:42:25 GMT
 Server: CouchDB/40c5c85 (Erlang OTP/17)
 X-Couch-Request-ID: 27e8ab2a
 X-CouchDB-Body-Time: 0
 {
 error: not_found, 
 reason: Database does not exist.
 }
 {code}
 Ok, there is no such database. But wait:
 {code}
 http --json 'http://localhost:15984/_db_updates?feed=eventsource'
 HTTP/1.1 400 Bad Request
 Cache-Control: must-revalidate
 Content-Length: 88
 Content-Type: application/json
 Date: Sat, 25 Oct 2014 13:39:59 GMT
 Server: CouchDB/40c5c85 (Erlang OTP/17)
 X-Couch-Request-ID: 3a5ca656
 X-CouchDB-Body-Time: 0
 {
 error: bad_request, 
 reason: Supported `feed` types: normal, continuous, longpoll
 }
 {code}
 The eventsource feed type is supported by CouchDB 1.x. Ok, let's try 
 suggested continuous one:
 {code}
 http --json 
 'http://localhost:15984/_db_updates?timeout=1000heartbeat=falsefeed=continuous'
 HTTP/1.1 400 Bad Request
 Cache-Control: must-revalidate
 Content-Length: 51
 Content-Type: application/json
 Date: Sat, 25 Oct 2014 13:50:59 GMT
 Server: CouchDB/40c5c85 (Erlang OTP/17)
 X-Couch-Request-ID: 6c560dc2
 X-CouchDB-Body-Time: 0
 {
 error: bad_request, 
 reason: invalid_integer
 }
 {code}
 The same request is correct for CouchDB 1.x. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (COUCHDB-2386) Cant undelete documents in master while passing _rev

2014-10-14 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson updated COUCHDB-2386:
---
Summary: Cant undelete documents in master while passing _rev  (was: Cant 
undelete documents in master)

Clarifying that you can undelete in master but not if you pass a _rev value.
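
A sketch of the distinction the new summary draws; the linked gist remains the authoritative reproduction, and the status codes on master at the time are exactly what this ticket disputes.

{code}
import requests

DB = "http://localhost:5984/db"

requests.put(DB)
rev1 = requests.put(DB + "/doc", json={"key": "value"}).json()["rev"]
rev2 = requests.delete(DB + "/doc", params={"rev": rev1}).json()["rev"]

# Path 1 (works on master): recreate the doc without passing any rev.
# requests.put(DB + "/doc", json={"key": "value"})

# Path 2 (the reported failure): recreate it while passing the tombstone rev.
r = requests.put(DB + "/doc", params={"rev": rev2}, json={"key": "value"})
print(r.status_code, r.text)
{code}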

 Cant undelete documents in master while passing _rev
 

 Key: COUCHDB-2386
 URL: https://issues.apache.org/jira/browse/COUCHDB-2386
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Database Core
Reporter: Dale Harvey

 Basic commands to reproduce:
 https://gist.github.com/daleharvey/cd5f058b20e92b52d80c
 There was a bug tracking this fix in previous versions of Couch; it was 
 reintroduced with the cluster merge:
 https://issues.apache.org/jira/browse/COUCHDB-292



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-1521) multipart parser gets multiple attachments mixed up

2014-10-09 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165692#comment-14165692
 ] 

Robert Newson commented on COUCHDB-1521:


Jan enhanced the multipart handling so that each part can have its own headers 
(including filenames), which I think was the fix for the mixups?

 multipart parser gets multiple attachments mixed up
 ---

 Key: COUCHDB-1521
 URL: https://issues.apache.org/jira/browse/COUCHDB-1521
 Project: CouchDB
  Issue Type: Bug
  Components: HTTP Interface
Affects Versions: 1.2
Reporter: Jens Alfke
Assignee: Randall Leeds

 When receiving a document PUT in multipart format, CouchDB gets the 
 attachments and MIME parts mixed up. Instead of looking at the headers of a 
 MIME part to identify which attachment it is (most likely by using the 
 'filename' property of the 'Content-Disposition:' header), it processes the 
 attachments according to the order in which their metadata objects appear in 
 the JSON body's '_attachments:' object.
 The problem with this is that JSON objects (dictionaries) are _not_ ordered 
 collections. I know that Erlang's implementation of them (as linked lists of 
 key/value pairs) happens to be ordered, and I think some JavaScript 
 implementations have the side effect of preserving order; but in many 
 languages these are implemented as hash tables and genuinely unordered.
 This means that when a program written in such a language converts a native 
 object to JSON, it has no control over (and probably no knowledge of) the 
 order in which the keys of the JSON object are written out. This makes it 
 impossible to then write the attachments in the same order.
 The only workaround seems to be for the program to implement its own custom 
 JSON encoder just so that it can write object keys in a known order (probably 
 sorted), which then enables it to write the attachment bodies in the same 
 order.
 NOTE: This is the flip side of COUCHDB-1368 which I filed last year; that bug 
 has to do with the same ordering issue when CouchDB _generates_ multipart 
 responses (and presents similar problems for clients not written in Erlang.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (COUCHDB-2367) Eliminate plaintext passwords altogether

2014-10-08 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson reassigned COUCHDB-2367:
--

Assignee: Javier Candeira

all yours!

 Eliminate plaintext passwords altogether
 

 Key: COUCHDB-2367
 URL: https://issues.apache.org/jira/browse/COUCHDB-2367
 Project: CouchDB
  Issue Type: Improvement
  Security Level: public(Regular issues) 
  Components: Database Core
Reporter: Javier Candeira
Assignee: Javier Candeira

 In discussion about https://issues.apache.org/jira/browse/COUCHDB-2364, 
 rnewson and candeira agreed on:
 +rnewson Maybe spent a little more time on the idea that we remove support 
 for plaintext passwords entirely?
 +rnewson I dislike the hash-on-startup thing.
 +rnewson we could insist that you set up admins via PUT _config
 +rnewson and remove the hash_unhashed_admins function, and also ignore 
 non-hashed lines in config
 +rnewson couchdb 2.0 could simply require the hashed version from the start 
 (and we'd supply a hashing tool akin to htpasswd in httpd), or 
  kandinski what about PUT _config, it would still exist?
 +rnewson absolutely, yes.
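
A sketch of the approach discussed in the chat above, using the classic PUT /_config/admins API; the pre-hashed value is a made-up placeholder in the "-pbkdf2-derived_key,salt,iterations" form CouchDB writes to the .ini, not a real hash, and the exact acceptance rules are the part this ticket would define.

{code}
import json
import requests

NODE = "http://localhost:5984"  # on the 2.0 dev cluster, _config lives on the node-local port

# Today: submit plaintext and the server hashes it on write.
requests.put(NODE + "/_config/admins/anna", data=json.dumps("secret"))

# The proposal: clients submit an already-hashed value, produced by an
# htpasswd-style tool, and plaintext is rejected. Placeholder value below.
prehashed = "-pbkdf2-0123456789abcdef0123456789abcdef01234567,00112233445566778899aabbccddeeff,10"
requests.put(NODE + "/_config/admins/anna", data=json.dumps(prehashed))
{code}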



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2343) /_config/admins/username fails on master

2014-09-30 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14153219#comment-14153219
 ] 

Robert Newson commented on COUCHDB-2343:


Note that it's specifically that the 'salt' value is generated at each node and 
used in the cookie verification. So basic auth will work, but cookie auth fails 
if you bounce around the cluster.

 /_config/admins/username fails on master
 

 Key: COUCHDB-2343
 URL: https://issues.apache.org/jira/browse/COUCHDB-2343
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Affects Versions: 2.0.0
Reporter: Joan Touzet
Priority: Blocker
  Labels: auth

 In a multi-node setup, calling _config/admins/username to create an admin 
 user fails to correctly configure a cluster with a new administrator. This 
 fails for two reasons:
 1) The call is only processed on a single node, and the admin entry is not 
 replicated
 2) Even if the call is repeated on all nodes manually, the hashes will be 
 different on each node, which will cause cookie failure when attempting to 
 authenticate via other machines.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2334) Metadata db cassim does not exist

2014-09-22 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14143183#comment-14143183
 ] 

Robert Newson commented on COUCHDB-2334:


None of those databases can be generated automatically any more, as they need 
to be created *after* the initial joining up of the cluster, an event we can't 
detect.

On port 5986, we can (and do) generate a _users and _replicator database since 
they are node local. The cassim database needs to be clustered, and so there's 
no node local one.

I agree with the ticket in general, though: it's not sufficient to merely log 
that this database doesn't exist, especially given that, unless other 
configuration changes are made, there's no penalty for not having it.

I'm sure Russell would point out that the migration of _security objects from 
shard files to a clustered database is still in progress and that we must 
finish the job. Alex is quite right to mention the oddness of the middle step.
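
For readers unfamiliar with the node-local versus clustered split described above, a sketch using the dev/run port conventions (15984 clustered, 15986 node-local for node1); the metadata database name is taken from the log message in this ticket.

{code}
import requests

CLUSTERED = "http://localhost:15984"
NODE_LOCAL = "http://localhost:15986"

# _users and _replicator can be created per node at boot:
print(requests.get(NODE_LOCAL + "/_users").status_code)   # expected 200

# The metadata db has to be clustered, so it only exists once someone creates
# it after the cluster is joined; until then the error above repeats.
print(requests.get(CLUSTERED + "/cassim").status_code)     # 404 until created
{code}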


 Metadata db cassim does not exist
 ---

 Key: COUCHDB-2334
 URL: https://issues.apache.org/jira/browse/COUCHDB-2334
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: BigCouch
Reporter: Alexander Shorin

 And so happens every 5 minutes:
 {code}
 2014-09-20 04:00:06.786 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:05:06.788 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:10:06.790 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:15:06.792 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:20:06.794 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:25:06.796 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:30:06.798 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:35:06.800 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:40:06.802 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:45:06.804 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:50:06.806 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:55:06.808 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 05:00:06.810 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 05:05:06.812 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 05:10:06.814 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2338) Reproduceable document revision hash calculation

2014-09-22 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14143266#comment-14143266
 ] 

Robert Newson commented on COUCHDB-2338:


The "completely random" assertion is false in almost all cases; we've had MD5 
for attachments for a very long while. The random value is only generated for 
databases that predate that. The code you cite shows that the MD5s are mixed 
in where available.

Sidenote: we should be generalising all the checksumming anyway, but with a view 
to removing MD5.

 Reproduceable document revision hash calculation
 

 Key: COUCHDB-2338
 URL: https://issues.apache.org/jira/browse/COUCHDB-2338
 Project: CouchDB
  Issue Type: Improvement
  Security Level: public(Regular issues) 
  Components: Database Core
Reporter: Alexander Shorin

 Current document revision hash implementation is very Erlang-specific:
 {code}
 new_revid(#doc{body=Body, revs={OldStart, OldRevs},
         atts=Atts, deleted=Deleted}) ->
     case [{N, T, M} || #att{name=N, type=T, md5=M} <- Atts, M =/= <<>>] of
     Atts2 when length(Atts) =/= length(Atts2) ->
         % We must have old style non-md5 attachments
         ?l2b(integer_to_list(couch_util:rand32()));
     Atts2 ->
         OldRev = case OldRevs of [] -> 0; [OldRev0|_] -> OldRev0 end,
         couch_util:md5(term_to_binary([Deleted, OldStart, OldRev, Body, 
             Atts2]))
     end.
 {code}
 All the bits in the code above are trivial in any programming language except 
 the {{term_to_binary}} function: to implement it correctly you need to dive 
 deeper into Erlang. I have nothing against that, Erlang is cool, but this 
 implementation detail turns the whole idea of reproducing a document revision 
 into a non-trivially complex operation.
 Rationale: you want to build a CouchDB-compatible storage on a technology 
 stack other than Erlang that will sync with CouchDB without worrying about 
 mismatched revisions for the same content with the same modification history 
 made in different compatible storages.
 P.S. Oh, yes, if you update attachments (add/del) the revision becomes 
 completely random. Moreover, if you just update an attachment of a document 
 there is some subtlety about the revision calculation I don't recall now, but 
 it is easy to notice by looking at what the function above takes on call.
 P.P.S. via https://twitter.com/janl/status/514019496110333952
 P.P.S. via https://twitter.com/janl/status/514019496110333952



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2334) Metadata db cassim does not exist

2014-09-22 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14143742#comment-14143742
 ] 

Robert Newson commented on COUCHDB-2334:


I think we should address the initial cluster startup story directly. I'd be 
quite happy to see a requirement to create the metadata db before the clustered 
interface becomes usable. That is, all operations simply fail until the cluster 
is configured properly, and we don't need to log it.

At the risk of opening a giant can of worms, I don't think it is immediately 
obvious to people that cassim is a special database that replaces the 
_security objects. While we were free, at Cloudant, to choose cute names, I 
think CouchDB ought to be a bit more obvious (if boring) in its component 
names. I suggest _meta for the database name, at least (since we do plan to 
store more than _security documents), but it's just one suggestion. I don't 
think we *need* to rename the cassim application unless someone feels strongly 
about that (I think we can pitch the open sesame / security thing just fine).

Anyway, the central idea here is to define a pre-production state for the 
cluster, where the administrator has to complete some steps before it will 
work. This could be a simple endpoint that the administrator uses to say that, 
yes, actually the 'nodes' db is now fully populated. When that happens, we can 
create all three dbs, _users, _replicator, _meta (or whatever it's called).


 Metadata db cassim does not exist
 ---

 Key: COUCHDB-2334
 URL: https://issues.apache.org/jira/browse/COUCHDB-2334
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: BigCouch
Reporter: Alexander Shorin

 And so happens every 5 minutes:
 {code}
 2014-09-20 04:00:06.786 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:05:06.788 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:10:06.790 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:15:06.792 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:20:06.794 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:25:06.796 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:30:06.798 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:35:06.800 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:40:06.802 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:45:06.804 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:50:06.806 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:55:06.808 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 05:00:06.810 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 05:05:06.812 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 05:10:06.814 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2334) Metadata db cassim does not exist

2014-09-22 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14144049#comment-14144049
 ] 

Robert Newson commented on COUCHDB-2334:


I'm thinking we need something more than that, e.g.:

POST /_setup_complete

That would be the signal that the cluster is fully joined, and we'd do 
whatever that version of CouchDB does when that happens for the first time. For 
2.0, that means ensuring that:

1) _users, _replicator and _meta are created
2) the cookie secret is set to the same value on all nodes
3) a third thing, if there is one

N.B. _setup_complete is a deliberately terrible name, to force us to think of 
a better one; also it's midnight.
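
Expressed as the manual steps an operator would run today, a sketch of what such a signal would have to trigger; _meta is only the name proposed earlier in this thread, and the endpoint itself does not exist.

{code}
import json
import requests

CLUSTERED = "http://localhost:15984"
BACKDOORS = ["http://localhost:15986", "http://localhost:25986", "http://localhost:35986"]
SECRET = "0123456789abcdef0123456789abcdef"  # example value only

# 1) create the clustered system databases once the cluster is fully joined
for db in ("_users", "_replicator", "_meta"):   # "_meta" is the proposed name
    requests.put(CLUSTERED + "/" + db)

# 2) give every node the same cookie secret so cookie auth works cluster-wide
for node in BACKDOORS:
    requests.put(node + "/_config/couch_httpd_auth/secret", data=json.dumps(SECRET))
{code}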

 Metadata db cassim does not exist
 ---

 Key: COUCHDB-2334
 URL: https://issues.apache.org/jira/browse/COUCHDB-2334
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: BigCouch
Reporter: Alexander Shorin

 And so happens every 5 minutes:
 {code}
 2014-09-20 04:00:06.786 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:05:06.788 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:10:06.790 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:15:06.792 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:20:06.794 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:25:06.796 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:30:06.798 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:35:06.800 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:40:06.802 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:45:06.804 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:50:06.806 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 04:55:06.808 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 05:00:06.810 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 05:05:06.812 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 2014-09-20 05:10:06.814 [error] node1@127.0.0.1 0.341.0 Metadata db 
 cassim does not exist
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (COUCHDB-1415) Re-insering a document silently fails after compact is executed

2014-09-17 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-1415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-1415.

Resolution: Fixed

Sorry for the poor bug tracking here. 

This was fixed in 
https://github.com/apache/couchdb-couch/commit/39df1d5e78a3ffd855cc9c2a6aa257237dda
 as part of the Cloudant merge.

It'll be in 2.0 as the ticket says. Yay.

 Re-insering a document silently fails after compact is executed
 ---

 Key: COUCHDB-1415
 URL: https://issues.apache.org/jira/browse/COUCHDB-1415
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.1.1
 Environment: Tested on multiple linux platforms
Reporter: Viktor Szabo
Assignee: Paul Joseph Davis
 Fix For: 2.0.0

 Attachments: patch


 When a document is re-inserted after a compact operation with the same 
 contents it was originally created with, the insert operation is silently 
 ignored, leaving the client unaware of the fact that its document is not 
 available in the database.
 Can be reproduced using the following sequence of steps:
 alias curl='curl -H "Content-Type: application/json"'
 url=http://localhost:5984/database
 1 curl -X PUT $url
 2 curl -X POST $url -d '{"_id": "bug", "key": "value"}'
 3 curl -X DELETE $url/bug?rev=1-59414e77c768bc202142ac82c2f129de
 4 curl -X POST $url/_compact
 5 curl -X POST $url -d '{"_id": "bug", "key": "value"}'
 6 curl -X GET $url/bug
   (bug here)
 1 {"ok":true}
   201
 2 [{"ok":true,"id":"bug","rev":"1-59414e77c768bc202142ac82c2f129de"}]
   201
 3 {"ok":true,"id":"bug","rev":"2-9b2e3bcc3752a3a952a3570b2ed4d27e"}
   200
 4 {"ok":true}
   202
 5 [{"ok":true,"id":"bug","rev":"1-59414e77c768bc202142ac82c2f129de"}]
   201
 6 {"error":"not_found","reason":"deleted"}
   404
 CouchDB shouldn't report "ok" on step 5 and then go on to claim that the doc 
 is deleted. Also, it seems to work on the second try:
 7 curl -X POST $url -d '{"_id": "bug", "key": "value"}'
 8 curl -X GET $url/bug
 7 {"ok":true,"id":"bug","rev":"3-674f864b73df1c80925e48436e21d550"}
   201
 8 {"_id":"bug","_rev":"3-674f864b73df1c80925e48436e21d550","key":"value"}
   200



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2324) Fix N in dev/run script

2014-09-10 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14128275#comment-14128275
 ] 

Robert Newson commented on COUCHDB-2324:


+1

 Fix N in dev/run script
 ---

 Key: COUCHDB-2324
 URL: https://issues.apache.org/jira/browse/COUCHDB-2324
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: BigCouch
Reporter: Mike Wallace
Priority: Trivial
 Fix For: 2.0.0


 Changing the N value in dev/run does not change the number of dev nodes that 
 are spun up. We should fix this and also make N a command line option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (COUCHDB-2322) Bugs in process limit counts in couch_proc_manager

2014-09-06 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-2322.

   Resolution: Fixed
Fix Version/s: 2.0.0

 Bugs in process limit counts in couch_proc_manager
 --

 Key: COUCHDB-2322
 URL: https://issues.apache.org/jira/browse/COUCHDB-2322
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Database Core
Reporter: Paul Joseph Davis
 Fix For: 2.0.0


 We found a number of bugs in the OS process limit thresholds as currently 
 implemented in couch_proc_manager.erl. So I'm fixing them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (COUCHDB-2321) It's possible to delete config section while it still has some options (UI bug)

2014-09-06 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson updated COUCHDB-2321:
---
Comment: was deleted

(was: Commit db58e794f937a52b6b61c964942e56afa7d03d8b in couchdb-couch's branch 
refs/heads/master from [~paul.joseph.davis]
[ https://git-wip-us.apache.org/repos/asf?p=couchdb-couch.git;h=db58e79 ]

Fix bugs with couch_proc_manager limits

This fixes the couch_proc_manager limit counting by rearranging the increments 
and decrements when processes are created and destroyed. It ensures that each 
time we remove a process from the ets table we decrement appropriately.

For incrementing, things are a bit more complicated in that we need to 
increment before inserting to the table. This is so that our hard limit applies 
even if one of our asynchronous spawn calls is opening a new process. This is 
accomplished by incrementing the counter and storing the async open call 
information in a new ets table. If the open is successful the counter is left 
untouched. If the open fails then we need to decrement the counter.

This also simplifies starting waiting clients when a process is either 
returned, exits, or fails to start, by isolating the logic and calling it in 
each place as necessary.

Closes COUCHDB-2321
)

 It's possible to delete config section while it still has some options (UI 
 bug)
 ---

 Key: COUCHDB-2321
 URL: https://issues.apache.org/jira/browse/COUCHDB-2321
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Fauxton
Reporter: Alexander Shorin
Assignee: Robert Kowalski

 1. Create a new section abc with option bar = baz
 2. Add another option to section abc like boo = foo
 3. Delete option bar. You'll see the following picture:
 http://i.imgur.com/SCKo7bk.png



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (COUCHDB-2026) JSONP responses should be sent with a application/javascript Content-type

2014-09-03 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson updated COUCHDB-2026:
---
Fix Version/s: 2.0.0

 JSONP responses should be sent with a application/javascript Content-type
 ---

 Key: COUCHDB-2026
 URL: https://issues.apache.org/jira/browse/COUCHDB-2026
 Project: CouchDB
  Issue Type: Sub-task
  Components: HTTP Interface
Reporter: Hank Knight
Assignee: Robert Kowalski
 Fix For: 1.1.1, 1.2, 2.0.0


 The Content-Type header for JSONP should be application/javascript
 While the content-type of text/javascript is widely used, it is obsolete 
 and may not be supported by future browsers.
 See:
 http://tools.ietf.org/html/rfc4329
 http://www.rfc-editor.org/rfc/rfc4329.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (COUCHDB-2026) JSONP responses should be sent with a application/javascript Content-type

2014-09-03 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson updated COUCHDB-2026:
---

JIRA refuses to let me remove 1.1.1, 1.2  from Fixed Versions because they've 
been archived, but this is not fixed in those versions. We should really not 
let users fill in this field.

 JSONP responses should be sent with a application/javascript Content-type
 ---

 Key: COUCHDB-2026
 URL: https://issues.apache.org/jira/browse/COUCHDB-2026
 Project: CouchDB
  Issue Type: Sub-task
  Components: HTTP Interface
Reporter: Hank Knight
Assignee: Robert Kowalski
 Fix For: 1.1.1, 1.2, 2.0.0


 The Content-Type header for JSONP should be application/javascript
 While the content-type of text/javascript is widely used, it is obsolete 
 and may not be supported by future browsers.
 See:
 http://tools.ietf.org/html/rfc4329
 http://www.rfc-editor.org/rfc/rfc4329.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (COUCHDB-2026) JSONP responses should be sent with a application/javascript Content-type

2014-09-03 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-2026.

Resolution: Fixed

 JSONP responses should be sent with a application/javascript Content-type
 ---

 Key: COUCHDB-2026
 URL: https://issues.apache.org/jira/browse/COUCHDB-2026
 Project: CouchDB
  Issue Type: Sub-task
  Components: HTTP Interface
Reporter: Hank Knight
Assignee: Robert Kowalski
 Fix For: 2.0.0, 1.1.1, 1.2


 The Content-Type header for JSONP should be application/javascript
 While the content-type of text/javascript is widely used, it is obsolete 
 and may not be supported by future browsers.
 See:
 http://tools.ietf.org/html/rfc4329
 http://www.rfc-editor.org/rfc/rfc4329.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (COUCHDB-2310) Add a bulk API for revs open_revs

2014-08-31 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14116912#comment-14116912
 ] 

Robert Newson commented on COUCHDB-2310:


Agreed, a new endpoint and clean 404 response for detection.

 Add a bulk API for revs  open_revs
 ---

 Key: COUCHDB-2310
 URL: https://issues.apache.org/jira/browse/COUCHDB-2310
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Nolan Lawson

 CouchDB replication is too slow.
 And what makes it so slow is that it's just so unnecessarily chatty. During 
 replication, you have to do a separate GET for each individual document, in 
 order to get the full {{_revisions}} object for that document (using the 
 {{revs}} and {{open_revs}} parameters – refer to [the TouchDB 
 writeup|https://github.com/couchbaselabs/TouchDB-iOS/wiki/Replication-Algorithm]
  or [Benoit's writeup|http://dataprotocols.org/couchdb-replication/] if you 
 need a refresher).
 So for example, let's say you've got a database full of 10,000 documents, and 
 you replicate using a batch size of 500 (batch sizes are configurable in 
 PouchDB). The conversation for a single batch basically looks like this:
 {code}
 - REPLICATOR: gimme 500 changes since seq X (1 GET request)
   - SOURCE: okay
 - REPLICATOR: gimme the _revs_diff for these 500 docs/_revs (1 POST request)
   - SOURCE: okay
 - repeat 500 times:
   - REPLICATOR: gimme the _revisions for doc n with _revs [...] (1 GET 
 request)
 - SOURCE: okay
 - REPLICATOR: here's a _bulk_docs with 500 documents (1 POST request)
 - TARGET: okay
 {code}
 See the problem here? That 500-loop, where we have to do a GET for each one 
 of 500 documents, is a lot of unnecessary back-and-forth, considering that 
 the replicator already knows what it needs before the loop starts. You can 
 parallelize, but if you assume a browser (e.g. for PouchDB), most browsers 
 only let you do ~8 simultaneous requests at once. Plus, there's latency and 
 HTTP headers to consider. So overall, it's not cool.
 So why do we even need to do the separate requests? Shouldn't {{_all_docs}} 
 be good enough? Turns out it's not, because we need this special 
 {{_revisions}} object.
 For example, consider a document {{'foo'}} with 10 revisions. You may compact 
 the database, in which case revisions {{1-x}} through {{9-x}} are no longer 
 retrievable. However, if you query using {{revs}} and {{open_revs}}, those 
 rev IDs are still available:
 {code}
 $ curl 'http://nolan.iriscouch.com/test/foo?revs=trueopen_revs=all'
 {
   _id: foo,
   _rev: 10-c78e199ad5e996b240c9d6482907088e,
   _revisions: {
 start: 10,
 ids: [
   c78e199ad5e996b240c9d6482907088e,
   f560283f1968a05046f0c38e468006bb,
   0091198554171c632c27c8342ddec5af,
   e0a023e2ea59db73f812ad773ea08b17,
   65d7f8b8206a244035edd9f252f206ad,
   069d1432a003c58bdd23f01ff80b718f,
   d21f26bb604b7fe9eba03ce4562cf37b,
   31d380f99a6e54875855e1c24469622d,
   3b4791360024426eadafe31542a2c34b,
   967a00dff5e02add41819138abb3284d
 ]
   }
 }
 {code}
 And in the replication algorithm, _this full \_revisions object is required_ 
 at the point when you copy the document from one database to another, which 
 is accomplished with a POST to {{_bulk_docs}} using {{new_edits=false}}. If 
 you don't have the full {{_revisions}} object, CouchDB accepts the new 
 revision, but considers it to be a conflict. (The exception is with 
 generation-1 documents, since they have no history, so as it says in the 
 TouchDB writeup, you can safely just use {{_all_docs}} as an optimization for 
 such documents.)
 And unfortunately, this {{_revision}} object is only available from the {{GET 
 /:dbid/:docid}} endpoint. Trust me; I've tried the other APIs. You can't get 
 it anywhere else.
 This is a huge problem, especially in PouchDB where we often have to deal 
 with CORS, meaning the number of HTTP requests is doubled. So for those 500 
 GETs, it's an extra 500 OPTIONs, which is just unacceptable.
 Replication does not have to be slow. While we were experimenting with ways 
 of fetching documents in bulk, we tried a technique that just relied on using 
 {{_changes}} with {{include_docs=true}} 
 ([#2472|https://github.com/pouchdb/pouchdb/pull/2472]). This pushed 
 conflicts into the target database, but on the upside, you can sync ~95k 
 documents from npm's skimdb repository to the browser in less than 20 
 minutes! (See [npm-browser.com|http://npm-browser.com] for a demo.)
 What an amazing story we could tell about the beauty of CouchDB replication, 
 if only this trick actually worked!
 My proposal is a simple one: just add the {{revs}} and {{open_revs}} options 
 to {{_all_docs}}. Presumably this would be aligned 

[jira] [Commented] (COUCHDB-2310) Add a bulk API for revs open_revs

2014-08-30 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14116335#comment-14116335
 ] 

Robert Newson commented on COUCHDB-2310:


Great writeup!

My first thought is to enhance the POST form of /dbname/_all_docs. Currently it 
expects {"keys": []} where the keys are doc _ids. This is because _all_docs 
apes the view API.

Here's my suggestion:

{"docs": [ {"id": "foo", "open_revs": ["1-foo", "2-bar"]}, {"id": "bar", ... } ]}

The response will return the named documents with all the specified open_revs in 
the order of the "docs" array. Each row will be a separate chunk; the server 
will not buffer the full response.

Deciding on an API is the hard part; I don't think the plumbing will be all that 
tricky.
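
To make the proposed request shape concrete, a Python sketch; current servers do not understand the "docs" field, so this shows the suggestion, not a working API.

{code}
import requests

body = {
    "docs": [
        {"id": "foo", "open_revs": ["1-foo", "2-bar"]},
        {"id": "bar", "open_revs": ["3-baz"]},
    ]
}
resp = requests.post("http://localhost:5984/db/_all_docs", json=body)
print(resp.status_code, resp.text[:200])
{code}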



 Add a bulk API for revs  open_revs
 ---

 Key: COUCHDB-2310
 URL: https://issues.apache.org/jira/browse/COUCHDB-2310
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Nolan Lawson

 CouchDB replication is too slow.
 And what makes it so slow is that it's just so unnecessarily chatty. During 
 replication, you have to do a separate GET for each individual document, in 
 order to get the full {{_revisions}} object for that document (using the 
 {{revs}} and {{open_revs}} parameters – refer to [the TouchDB 
 writeup|https://github.com/couchbaselabs/TouchDB-iOS/wiki/Replication-Algorithm]
  or [Benoit's writeup|http://dataprotocols.org/couchdb-replication/] if you 
 need a refresher).
 So for example, let's say you've got a database full of 10,000 documents, and 
 you replicate using a batch size of 500 (batch sizes are configurable in 
 PouchDB). The conversation for a single batch basically looks like this:
 {code}
 - REPLICATOR: gimme 500 changes since seq X (1 GET request)
   - SOURCE: okay
 - REPLICATOR: gimme the _revs_diff for these 500 docs/_revs (1 POST request)
   - SOURCE: okay
 - repeat 500 times:
   - REPLICATOR: gimme the _revisions for doc n with _revs [...] (1 GET 
 request)
 - SOURCE: okay
 - REPLICATOR: here's a _bulk_docs with 500 documents (1 POST request)
 - TARGET: okay
 {code}
 See the problem here? That 500-loop, where we have to do a GET for each one 
 of 500 documents, is a lot of unnecessary back-and-forth, considering that 
 the replicator already knows what it needs before the loop starts. You can 
 parallelize, but if you assume a browser (e.g. for PouchDB), most browsers 
 only let you do ~8 simultaneous requests at once. Plus, there's latency and 
 HTTP headers to consider. So overall, it's not cool.
 So why do we even need to do the separate requests? Shouldn't {{_all_docs}} 
 be good enough? Turns out it's not, because we need this special 
 {{_revisions}} object.
 For example, consider a document {{'foo'}} with 10 revisions. You may compact 
 the database, in which case revisions {{1-x}} through {{9-x}} are no longer 
 retrievable. However, if you query using {{revs}} and {{open_revs}}, those 
 rev IDs are still available:
 {code}
 $ curl 'http://nolan.iriscouch.com/test/foo?revs=trueopen_revs=all'
 {
   _id: foo,
   _rev: 10-c78e199ad5e996b240c9d6482907088e,
   _revisions: {
 start: 10,
 ids: [
   c78e199ad5e996b240c9d6482907088e,
   f560283f1968a05046f0c38e468006bb,
   0091198554171c632c27c8342ddec5af,
   e0a023e2ea59db73f812ad773ea08b17,
   65d7f8b8206a244035edd9f252f206ad,
   069d1432a003c58bdd23f01ff80b718f,
   d21f26bb604b7fe9eba03ce4562cf37b,
   31d380f99a6e54875855e1c24469622d,
   3b4791360024426eadafe31542a2c34b,
   967a00dff5e02add41819138abb3284d
 ]
   }
 }
 {code}
 And in the replication algorithm, _this full \_revisions object is required_ 
 at the point when you copy the document from one database to another, which 
 is accomplished with a POST to {{_bulk_docs}} using {{new_edits=false}}. If 
 you don't have the full {{_revisions}} object, CouchDB accepts the new 
 revision, but considers it to be a conflict. (The exception is with 
 generation-1 documents, since they have no history, so as it says in the 
 TouchDB document, you can safely just use {{_all_docs}} as an optimization 
 for such documents.)
 And unfortunately, this {{_revision}} object is only available from the {{GET 
 /:dbid/:docid}} endpoint. Trust me; I've tried the other APIs. You can't get 
 it anywhere else.
 This is a huge problem, especially in PouchDB where we often have to deal 
 with CORS, meaning the number of HTTP requests is doubled. So for each of 
 those 500 GETs, it's an extra 500 OPTIONs, which is just unacceptable.
 Replication does not have to be slow. While we were experimenting with ways 
 of fetching documents in bulk, we tried a technique that just relied on using 
 {{_changes}} with {{include_docs=true}} 
 

[jira] [Commented] (COUCHDB-2310) Add a bulk API for revs open_revs

2014-08-30 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14116413#comment-14116413
 ] 

Robert Newson commented on COUCHDB-2310:


We'll need an array so that we can ensure that docs are returned in the same 
order as the request. While CouchDB preserves the order of keys in an object 
when marshalling to and from JSON, other libraries don't (and they are 
obviously not required to).

We could arrange that "docs" is tested first, so you could get feature 
detection by posting:

{code}
{"docs": [ whatever ], "keys": {}}
{code}

Versions of CouchDB that don't look for "docs" will fail that with a 400. Cheesy 
but acceptable?
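
The client side of that detection trick might look like the sketch below; the bulk "docs" field is still only a proposal, and the fallback is the per-document GET the original writeup describes.

{code}
import requests

DB = "http://localhost:5984/db"
body = {"docs": [{"id": "foo", "open_revs": ["1-abc"]}], "keys": {}}
resp = requests.post(DB + "/_all_docs", json=body)

if resp.status_code == 400:
    # Old server: it choked on "keys" not being an array; fall back to one
    # GET per document with revs/open_revs.
    fallback = requests.get(DB + "/foo", params={"revs": "true", "open_revs": "all"})
    print("bulk API unavailable, fallback status:", fallback.status_code)
else:
    print("bulk API available:", resp.status_code)
{code}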

 Add a bulk API for revs  open_revs
 ---

 Key: COUCHDB-2310
 URL: https://issues.apache.org/jira/browse/COUCHDB-2310
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Nolan Lawson

 CouchDB replication is too slow.
 And what makes it so slow is that it's just so unnecessarily chatty. During 
 replication, you have to do a separate GET for each individual document, in 
 order to get the full {{_revisions}} object for that document (using the 
 {{revs}} and {{open_revs}} parameters – refer to [the TouchDB 
 writeup|https://github.com/couchbaselabs/TouchDB-iOS/wiki/Replication-Algorithm]
  or [Benoit's writeup|http://dataprotocols.org/couchdb-replication/] if you 
 need a refresher).
 So for example, let's say you've got a database full of 10,000 documents, and 
 you replicate using a batch size of 500 (batch sizes are configurable in 
 PouchDB). The conversation for a single batch basically looks like this:
 {code}
 - REPLICATOR: gimme 500 changes since seq X (1 GET request)
   - SOURCE: okay
 - REPLICATOR: gimme the _revs_diff for these 500 docs/_revs (1 POST request)
   - SOURCE: okay
 - repeat 500 times:
   - REPLICATOR: gimme the _revisions for doc n with _revs [...] (1 GET 
 request)
 - SOURCE: okay
 - REPLICATOR: here's a _bulk_docs with 500 documents (1 POST request)
 - TARGET: okay
 {code}
 See the problem here? That 500-loop, where we have to do a GET for each one 
 of 500 documents, is a lot of unnecessary back-and-forth, considering that 
 the replicator already knows what it needs before the loop starts. You can 
 parallelize, but if you assume a browser (e.g. for PouchDB), most browsers 
 only let you do ~8 simultaneous requests at once. Plus, there's latency and 
 HTTP headers to consider. So overall, it's not cool.
 So why do we even need to do the separate requests? Shouldn't {{_all_docs}} 
 be good enough? Turns out it's not, because we need this special 
 {{_revisions}} object.
 For example, consider a document {{'foo'}} with 10 revisions. You may compact 
 the database, in which case revisions {{1-x}} through {{9-x}} are no longer 
 retrievable. However, if you query using {{revs}} and {{open_revs}}, those 
 rev IDs are still available:
 {code}
 $ curl 'http://nolan.iriscouch.com/test/foo?revs=trueopen_revs=all'
 {
   _id: foo,
   _rev: 10-c78e199ad5e996b240c9d6482907088e,
   _revisions: {
 start: 10,
 ids: [
   c78e199ad5e996b240c9d6482907088e,
   f560283f1968a05046f0c38e468006bb,
   0091198554171c632c27c8342ddec5af,
   e0a023e2ea59db73f812ad773ea08b17,
   65d7f8b8206a244035edd9f252f206ad,
   069d1432a003c58bdd23f01ff80b718f,
   d21f26bb604b7fe9eba03ce4562cf37b,
   31d380f99a6e54875855e1c24469622d,
   3b4791360024426eadafe31542a2c34b,
   967a00dff5e02add41819138abb3284d
 ]
   }
 }
 {code}
 And in the replication algorithm, _this full \_revisions object is required_ 
 at the point when you copy the document from one database to another, which 
 is accomplished with a POST to {{_bulk_docs}} using {{new_edits=false}}. If 
 you don't have the full {{_revisions}} object, CouchDB accepts the new 
 revision, but considers it to be a conflict. (The exception is with 
 generation-1 documents, since they have no history, so as it says in the 
 TouchDB writeup, you can safely just use {{_all_docs}} as an optimization for 
 such documents.)
 And unfortunately, this {{_revision}} object is only available from the {{GET 
 /:dbid/:docid}} endpoint. Trust me; I've tried the other APIs. You can't get 
 it anywhere else.
 This is a huge problem, especially in PouchDB where we often have to deal 
 with CORS, meaning the number of HTTP requests is doubled. So for those 500 
 GETs, it's an extra 500 OPTIONs, which is just unacceptable.
 Replication does not have to be slow. While we were experimenting with ways 
 of fetching documents in bulk, we tried a technique that just relied on using 
 {{_changes}} with {{include_docs=true}} 
 ([#2472|https://github.com/pouchdb/pouchdb/pull/2472]). This pushed 
 conflicts into the target 

[jira] [Commented] (COUCHDB-1592) Free space check for automatic compaction doesn't follow symlinks

2014-08-28 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14114122#comment-14114122
 ] 

Robert Newson commented on COUCHDB-1592:


2.0.0 is the next release after 1.6.1. We'll backport if that changes, though 
it's hard to see how it could.

 Free space check for automatic compaction doesn't follow symlinks
 -

 Key: COUCHDB-1592
 URL: https://issues.apache.org/jira/browse/COUCHDB-1592
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.2
Reporter: Nils Breunese
 Fix For: 2.0.0


 We've got a problem with automatic compaction not running due to low 
 diskspace according to CouchDB. According to our system administrators there 
 is more than enough space (more than twice the currently used space), but the 
 data directory is a symlink to the real data storage. It seems CouchDB is 
 checking the diskspace on the filesystem on which the symlink resides instead 
 of the diskspace on the linked filesystem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (COUCHDB-1592) Free space check for automatic compaction doesn't follow symlinks

2014-08-23 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-1592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-1592.


   Resolution: Fixed
Fix Version/s: 2.0.0

 Free space check for automatic compaction doesn't follow symlinks
 -

 Key: COUCHDB-1592
 URL: https://issues.apache.org/jira/browse/COUCHDB-1592
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.2
Reporter: Nils Breunese
 Fix For: 2.0.0


 We've got a problem with automatic compaction not running because CouchDB 
 reports low disk space. According to our system administrators there 
 is more than enough space (more than twice the currently used space), but the 
 data directory is a symlink to the real data storage. It seems CouchDB is 
 checking the disk space on the filesystem on which the symlink resides instead 
 of the disk space on the linked filesystem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2299) admin users are unable to login after upgrading to 1.6.0 when older password hashes are used

2014-08-21 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105515#comment-14105515
 ] 

Robert Newson commented on COUCHDB-2299:


I know how computers work and so I have made a fix of this here, thank you: 
https://git-wip-us.apache.org/repos/asf?p=couchdb.git;h=5e46f3b

 admin users are unable to login after upgrading to 1.6.0 when older password 
 hashes are used
 

 Key: COUCHDB-2299
 URL: https://issues.apache.org/jira/browse/COUCHDB-2299
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Database Core
Affects Versions: 1.6.0
Reporter: Dave Cottlehuber
Priority: Blocker
 Fix For: 1.6.1


 # issue
 When a couch is upgraded to 1.6.0, and the config files contain an [admins] 
 section with non-PBKDF2 hashed passwords (old-style, pre-1.3.1), then couchdb 
 will not let those admin users log in.
 # reproduce
 - install 1.2.1 through 1.5.1 (tested those + 1.3.1 + 1.6.1-rc.3)
 - create a new admin user via futon
 - remove old binaries etc `rm -rf bin share lib` 
 - only dbs and .ini files remain (apart from log uri etc) 
 - install 1.6.0 (or 1.6.1-rc.3 with the fix for the raw/unhashed password issue) 
 - try to log in using admin via futon
 {code}
 2 [debug] [0.146.0] 'POST' /_session {1,1} from 94.136.7.161
 Headers: [{'Accept',application/json},
   {'Accept-Encoding',gzip,deflate},
   {'Accept-Language',en-US,en;q=0.8,de;q=0.6},
   {'Connection',keep-alive},
   {'Content-Length',25},
   {'Content-Type',application/x-www-form-urlencoded; charset=UTF-8},
   {'Cookie',AuthSession=},
   {Dnt,1},
   {'Host',130.211.98.121:5984},
   {Origin,http://130.211.98.121:5984},
   {'Referer',http://130.211.98.121:5984/_utils/},
   {'User-Agent',Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) 
 AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2129.0 Safari/537.36},
   {X-Requested-With,XMLHttpRequest}]
 [debug] [0.146.0] OAuth Params: []
 [debug] [0.146.0] Attempt Login: admin
 [debug] [0.117.0] DDocProc found for DDocKey: {_design/_auth,
  
 2-7837bd4a550c1a65ac96c258e83d8b8c}
 [debug] [0.171.0] OS Process #Port0.3041 Input  :: 
 [reset,{reduce_limit:true,timeout:5000}]
 [debug] [0.171.0] OS Process #Port0.3041 Output :: true
 [debug] [0.171.0] OS Process #Port0.3041 Input  :: 
 [ddoc,_design/_auth,
 [validate_doc_update],
 [{_id:,
 password_scheme:pbkdf2,
 iterations:10,roles:[_admin],
 salt:a755d787383cdc147808a3ce2326479e,
 password_scheme:simple,
 derived_key:77bc076166db06fd940540ea7dc9d181e7e44741,
 _revisions:{start:0,ids:[]}},
 null,
 {db:_users,name:null,roles:[_admin]},{}]]
 [debug] [0.171.0] OS Process #Port0.3041 Output :: {forbidden:doc.type 
 must be user}
 [debug] [0.146.0] Minor error in HTTP request: {forbidden,
   doc.type must be user}
 [debug] [0.146.0] Stacktrace: [{couch_db,update_doc,4,
  [{file,couch_db.erl},{line,432}]},
  {couch_httpd_auth,
  '-maybe_upgrade_password_hash/3-fun-0-',
  4,
  [{file,couch_httpd_auth.erl},
   {line,355}]},
  {couch_util,with_db,2,
  [{file,couch_util.erl},{line,443}]},
  {couch_httpd_auth,handle_session_req,1,
  [{file,couch_httpd_auth.erl},
   {line,275}]},
  {couch_httpd,handle_request_int,5,
  [{file,couch_httpd.erl},{line,318}]},
  {mochiweb_http,headers,5,
  [{file,mochiweb_http.erl},{line,94}]},
  {proc_lib,init_p_do_apply,3,
  [{file,proc_lib.erl},{line,227}]}]
 [info] [0.146.0] 94.136.7.161 - - POST /_session 403
 {code}
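 As a workaround sketch (not part of the original report, and assuming the 
 -pbkdf2-derived_key,salt,iterations admin format; verify against your own 
 local.ini before relying on it), a new-style hash can be generated manually 
 and pasted into the [admins] section:
 {code}
 # Assumption: PBKDF2-HMAC-SHA1 with a 20-byte derived key, as used by the
 # 1.3+ password scheme. Iterations are configurable; 10 matches the log above.
 import binascii
 import hashlib
 import os

 def admin_hash(password, iterations=10):
     salt = binascii.hexlify(os.urandom(16))
     dk = hashlib.pbkdf2_hmac('sha1', password.encode('utf-8'),
                              salt, iterations, dklen=20)
     return '-pbkdf2-%s,%s,%d' % (binascii.hexlify(dk).decode(),
                                  salt.decode(), iterations)

 print(admin_hash('secret'))
 {code}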



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2295) Connection hangs on document update for multipart/related and transfer encoding chunked request

2014-08-17 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099967#comment-14099967
 ] 

Robert Newson commented on COUCHDB-2295:


oh that's naughty. We should throw a proper error earlier if it's a chunked 
transfer encoding, since we don't have the code to support it yet (or, you 
know, add that code).


 Connection hangs on document update for multipart/related and transfer 
 encoding chunked request
 ---

 Key: COUCHDB-2295
 URL: https://issues.apache.org/jira/browse/COUCHDB-2295
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Alexander Shorin

 Script to reproduce:
 {code}
 import pprint
 import requests

 body = [
     b'--996713c691ec4fd5b717ef2740893b78\r\n',
     b'Content-Type: application/json\r\n',
     b'\r\n',
     b'{"_id": "test", "_attachments": {"foo": {"follows": true, '
     b'"content_type": "text/plain", "length": 12}}}\r\n',
     b'--996713c691ec4fd5b717ef2740893b78\r\n',
     b'Content-Type: text/plain\r\n'
     b'Content-Disposition: attachment; filename="foo"\r\n'
     b'Content-Length: 12\r\n'
     b'\r\n',
     b'Time to Relax!',
     b'--996713c691ec4fd5b717ef2740893b78--\r\n'
 ]
 url = 'http://localhost:5984/db/test'
 headers = {
     'Content-Type': 'multipart/related; '
                     'boundary=996713c691ec4fd5b717ef2740893b78'
 }
 # Passing an iterator as the body makes requests use chunked transfer encoding.
 resp = requests.put(url, headers=headers, data=iter(body))
 pprint.pprint(resp.json())
 {code}
 This runs a request:
 {code}
 PUT /db/test HTTP/1.1
 Host: localhost:5984
 Accept-Encoding: gzip, deflate
 Transfer-Encoding: chunked
 User-Agent: python-requests/2.3.0 CPython/3.4.1 Linux/3.15.5-gentoo
 Accept: */*
 Content-Type: multipart/related; boundary=996713c691ec4fd5b717ef2740893b78
 24
 --996713c691ec4fd5b717ef2740893b78
 20
 Content-Type: application/json
 2
 68
 {"_id": "test", "_attachments": {"foo": {"follows": true, "content_type": 
 "text/plain", "length": 14}}}
 24
 --996713c691ec4fd5b717ef2740893b78
 60
 Content-Type: text/plain
 Content-Disposition: attachment;filename=foo
 Content-Length: 12
 e
 Time to Relax!
 26
 --996713c691ec4fd5b717ef2740893b78--
 0
 {code}
 But the connection hangs: CouchDB thinks there is more data to come even though 
 the zero-length chunk has been sent, and it doesn't reply with anything to the 
 client, which has finished the request and is waiting for the response.
 The problem can be worked around by specifying the full Content-Length of the 
 multipart body in the request, which defeats the whole point of chunked transfer.
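 A workaround sketch (continuing the reproduction script above, not part of the 
 original report): joining the parts into a single bytes object makes requests 
 send a Content-Length header instead of Transfer-Encoding: chunked.
 {code}
 import requests

 # `body`, `url` and `headers` as defined in the reproduction script above.
 payload = b''.join(body)
 resp = requests.put(url, headers=headers, data=payload)
 print(resp.status_code, resp.json())
 {code}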



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2295) Connection hangs on document update for multipart/related and transfer encoding chunked request

2014-08-17 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099970#comment-14099970
 ] 

Robert Newson commented on COUCHDB-2295:


To clarify, receive_request_data/2 is not the origin of the bug, it's just the 
first place that goes wrong given an unexpected input. The multipart request 
form was added for the replicator originally, and it always forms a correctly 
encoded request. The code to support chunked multipart requests is just not 
present currently, I believe.

 Connection hangs on document update for multipart/related and transfer 
 encoding chunked request
 ---

 Key: COUCHDB-2295
 URL: https://issues.apache.org/jira/browse/COUCHDB-2295
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Alexander Shorin

 Script to reproduce:
 {code}
 import pprint
 import requests

 body = [
     b'--996713c691ec4fd5b717ef2740893b78\r\n',
     b'Content-Type: application/json\r\n',
     b'\r\n',
     b'{"_id": "test", "_attachments": {"foo": {"follows": true, '
     b'"content_type": "text/plain", "length": 12}}}\r\n',
     b'--996713c691ec4fd5b717ef2740893b78\r\n',
     b'Content-Type: text/plain\r\n'
     b'Content-Disposition: attachment; filename="foo"\r\n'
     b'Content-Length: 12\r\n'
     b'\r\n',
     b'Time to Relax!',
     b'--996713c691ec4fd5b717ef2740893b78--\r\n'
 ]
 url = 'http://localhost:5984/db/test'
 headers = {
     'Content-Type': 'multipart/related; '
                     'boundary=996713c691ec4fd5b717ef2740893b78'
 }
 # Passing an iterator as the body makes requests use chunked transfer encoding.
 resp = requests.put(url, headers=headers, data=iter(body))
 pprint.pprint(resp.json())
 {code}
 This runs a request:
 {code}
 PUT /db/test HTTP/1.1
 Host: localhost:5984
 Accept-Encoding: gzip, deflate
 Transfer-Encoding: chunked
 User-Agent: python-requests/2.3.0 CPython/3.4.1 Linux/3.15.5-gentoo
 Accept: */*
 Content-Type: multipart/related; boundary=996713c691ec4fd5b717ef2740893b78
 24
 --996713c691ec4fd5b717ef2740893b78
 20
 Content-Type: application/json
 2
 68
 {"_id": "test", "_attachments": {"foo": {"follows": true, "content_type": 
 "text/plain", "length": 14}}}
 24
 --996713c691ec4fd5b717ef2740893b78
 60
 Content-Type: text/plain
 Content-Disposition: attachment;filename=foo
 Content-Length: 12
 e
 Time to Relax!
 26
 --996713c691ec4fd5b717ef2740893b78--
 0
 {code}
 But the connection hangs: CouchDB thinks there is more data to come even though 
 the zero-length chunk has been sent, and it doesn't reply with anything to the 
 client, which has finished the request and is waiting for the response.
 The problem can be worked around by specifying the full Content-Length of the 
 multipart body in the request, which defeats the whole point of chunked transfer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2295) Connection hangs on document update for multipart/related and transfer encoding chunked request

2014-08-17 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099971#comment-14099971
 ] 

Robert Newson commented on COUCHDB-2295:


For a quick fix, we can add an is_integer(LenLeft) guard in each clause, to ensure an 
immediate process crash and a prompt failure for the user.

 Connection hangs on document update for multipart/related and transfer 
 encoding chunked request
 ---

 Key: COUCHDB-2295
 URL: https://issues.apache.org/jira/browse/COUCHDB-2295
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Alexander Shorin

 Script to reproduce:
 {code}
 import pprint
 import requests

 body = [
     b'--996713c691ec4fd5b717ef2740893b78\r\n',
     b'Content-Type: application/json\r\n',
     b'\r\n',
     b'{"_id": "test", "_attachments": {"foo": {"follows": true, '
     b'"content_type": "text/plain", "length": 12}}}\r\n',
     b'--996713c691ec4fd5b717ef2740893b78\r\n',
     b'Content-Type: text/plain\r\n'
     b'Content-Disposition: attachment; filename="foo"\r\n'
     b'Content-Length: 12\r\n'
     b'\r\n',
     b'Time to Relax!',
     b'--996713c691ec4fd5b717ef2740893b78--\r\n'
 ]
 url = 'http://localhost:5984/db/test'
 headers = {
     'Content-Type': 'multipart/related; '
                     'boundary=996713c691ec4fd5b717ef2740893b78'
 }
 # Passing an iterator as the body makes requests use chunked transfer encoding.
 resp = requests.put(url, headers=headers, data=iter(body))
 pprint.pprint(resp.json())
 {code}
 This runs a request:
 {code}
 PUT /db/test HTTP/1.1
 Host: localhost:5984
 Accept-Encoding: gzip, deflate
 Transfer-Encoding: chunked
 User-Agent: python-requests/2.3.0 CPython/3.4.1 Linux/3.15.5-gentoo
 Accept: */*
 Content-Type: multipart/related; boundary=996713c691ec4fd5b717ef2740893b78
 24
 --996713c691ec4fd5b717ef2740893b78
 20
 Content-Type: application/json
 2
 68
 {"_id": "test", "_attachments": {"foo": {"follows": true, "content_type": 
 "text/plain", "length": 14}}}
 24
 --996713c691ec4fd5b717ef2740893b78
 60
 Content-Type: text/plain
 Content-Disposition: attachment;filename=foo
 Content-Length: 12
 e
 Time to Relax!
 26
 --996713c691ec4fd5b717ef2740893b78--
 0
 {code}
 But the connection hangs: CouchDB thinks there is more data to come even though 
 the zero-length chunk has been sent, and it doesn't reply with anything to the 
 client, which has finished the request and is waiting for the response.
 The problem can be worked around by specifying the full Content-Length of the 
 multipart body in the request, which defeats the whole point of chunked transfer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (COUCHDB-2280) Default view options in design document

2014-07-28 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-2280.


Resolution: Invalid

Hi,

There are only two view generation options (described at 
https://wiki.apache.org/couchdb/HTTP_view_API#View_Generation_Options) and 
include_docs is not one of them.

The options object in the design document does not affect query time parameters.
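For reference, include_docs has to be supplied at query time, for example (a 
sketch; the database, design document and view names are placeholders):
{code}
import requests

# Request linked documents at query time; the "options" object in the design
# document cannot make this the default.
resp = requests.get('http://localhost:5984/db/_design/app/_view/by_ref',
                    params={'include_docs': 'true'})
for row in resp.json()['rows']:
    print(row['id'], row.get('doc'))
{code}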

 Default view options in design document
 ---

 Key: COUCHDB-2280
 URL: https://issues.apache.org/jira/browse/COUCHDB-2280
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: JavaScript View Server
Reporter: Weixiang Guan

 I have put ??"options": {"include_docs": true}?? in my design document and in 
 my *map* functions I emitted something like ??{"_id": doc._id}??. However, when 
 I query the view without explicitly specifying ??include_docs=true?? I do not 
 get the linked documents (the other way around, when I specify 
 ??include_docs=true??, I get the desired results with linked documents). So I 
 think the problem is that the ??options?? field in the design document is not 
 working. In the documentation I read that one can put query parameters in the 
 ??options?? field to make them the defaults, and I did not see any other 
 requirement for making this work. Please check this.
 BTW, I am using CouchDB 1.5 (1.6 is not working on my computer, don't know 
 why).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2280) Default view options in design document

2014-07-28 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076141#comment-14076141
 ] 

Robert Newson commented on COUCHDB-2280:


Those docs do not seem to correspond to reality. I cannot find a ticket related 
to that enhancement and the codebase indicates that no such feature exists in 
any version. I'll talk with the author of that paragraph for clarification.


 Default view options in design document
 ---

 Key: COUCHDB-2280
 URL: https://issues.apache.org/jira/browse/COUCHDB-2280
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: JavaScript View Server
Reporter: Weixiang Guan

 I have put {code:javascript}"options": {"include_docs": true}{code} in my 
 design document and in my *map* functions I emitted something like 
 {code:javascript}{"_id": doc._id}{code}. However, when I query the view without 
 explicitly specifying {quote}include_docs=true{quote} I do not get the linked 
 documents (the other way around, when I specify include_docs=true, I get the 
 desired results with linked documents). So I think the problem is that the 
 options field in the design document is not working. In the documentation I 
 read that one can put query parameters in the options field to make them the 
 defaults, and I did not see any other requirement for making this work. Please 
 check this.
 BTW, I am using CouchDB 1.5 (1.6 is not working on my computer, don't know 
 why).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (COUCHDB-2280) Default view options in design document

2014-07-28 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-2280.


Resolution: Invalid

I've confirmed that the documentation was wrong, sorry about that!

I've removed the paragraph but it will be a few minutes before the site updates.

 Default view options in design document
 ---

 Key: COUCHDB-2280
 URL: https://issues.apache.org/jira/browse/COUCHDB-2280
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: JavaScript View Server
Reporter: Weixiang Guan

 I have put {code:javascript}"options": {"include_docs": true}{code} in my 
 design document and in my *map* functions I emitted something like 
 {code:javascript}{"_id": doc._id}{code}. However, when I query the view without 
 explicitly specifying {quote}include_docs=true{quote} I do not get the linked 
 documents (the other way around, when I specify include_docs=true, I get the 
 desired results with linked documents). So I think the problem is that the 
 options field in the design document is not working. In the documentation I 
 read that one can put query parameters in the options field to make them the 
 defaults, and I did not see any other requirement for making this work. Please 
 check this.
 BTW, I am using CouchDB 1.5 (1.6 is not working on my computer, don't know 
 why).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-1415) Re-inserting a document silently fails after compact is executed

2014-06-30 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14048038#comment-14048038
 ] 

Robert Newson commented on COUCHDB-1415:


I can't say beyond 2014 but people are working on it constantly.

 Re-inserting a document silently fails after compact is executed
 ---

 Key: COUCHDB-1415
 URL: https://issues.apache.org/jira/browse/COUCHDB-1415
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.1.1
 Environment: Tested on multiple linux platforms
Reporter: Viktor Szabo
Assignee: Paul Joseph Davis
 Fix For: 2.0.0

 Attachments: patch


 When a document is re-inserted after a compact operation with the same 
 contents it was originally created with, the insert operation is silently 
 ignored, leaving the client unaware of the fact that its document is not 
 available in the database.
 Can be reproduced using the following sequence of steps:
 alias curl='curl -H "Content-Type: application/json"'
 url='http://localhost:5984/database'
 1 curl -X PUT $url
 2 curl -X POST $url -d '{"_id": "bug", "key": "value"}'
 3 curl -X DELETE $url/bug?rev=1-59414e77c768bc202142ac82c2f129de
 4 curl -X POST $url/_compact
 5 curl -X POST $url -d '{"_id": "bug", "key": "value"}'
 6 curl -X GET $url/bug
   (bug here)
 1 {"ok":true}
   201
 2 [{"ok":true,"id":"bug","rev":"1-59414e77c768bc202142ac82c2f129de"}]
   201
 3 {"ok":true,"id":"bug","rev":"2-9b2e3bcc3752a3a952a3570b2ed4d27e"}
   200
 4 {"ok":true}
   202
 5 [{"ok":true,"id":"bug","rev":"1-59414e77c768bc202142ac82c2f129de"}]
   201
 6 {"error":"not_found","reason":"deleted"}
   404
 CouchDB shouldn't report "ok" on step 5 and then go on to claim that the doc 
 is deleted. Also, it seems to work on the second try:
 7 curl -X POST $url -d '{"_id": "bug", "key": "value"}'
 8 curl -X GET $url/bug
 7 {"ok":true,"id":"bug","rev":"3-674f864b73df1c80925e48436e21d550"}
   201
 8 {"_id":"bug","_rev":"3-674f864b73df1c80925e48436e21d550","key":"value"}
   200



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-1415) Re-inserting a document silently fails after compact is executed

2014-06-27 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14045992#comment-14045992
 ] 

Robert Newson commented on COUCHDB-1415:


It'll certainly be part of 2.0 which is our next release (unless we have to do 
a security-related patch to 1.6.0).


 Re-inserting a document silently fails after compact is executed
 ---

 Key: COUCHDB-1415
 URL: https://issues.apache.org/jira/browse/COUCHDB-1415
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.1.1
 Environment: Tested on multiple linux platforms
Reporter: Viktor Szabo
Assignee: Paul Joseph Davis
 Attachments: patch


 When a document is re-inserted after a compact operation with the same 
 contents it was originally created with, the insert operation is silently 
 ignored, leaving the client unaware of the fact that its document is not 
 available in the database.
 Can be reproduced using the following sequence of steps:
 alias curl='curl -H "Content-Type: application/json"'
 url='http://localhost:5984/database'
 1 curl -X PUT $url
 2 curl -X POST $url -d '{"_id": "bug", "key": "value"}'
 3 curl -X DELETE $url/bug?rev=1-59414e77c768bc202142ac82c2f129de
 4 curl -X POST $url/_compact
 5 curl -X POST $url -d '{"_id": "bug", "key": "value"}'
 6 curl -X GET $url/bug
   (bug here)
 1 {"ok":true}
   201
 2 [{"ok":true,"id":"bug","rev":"1-59414e77c768bc202142ac82c2f129de"}]
   201
 3 {"ok":true,"id":"bug","rev":"2-9b2e3bcc3752a3a952a3570b2ed4d27e"}
   200
 4 {"ok":true}
   202
 5 [{"ok":true,"id":"bug","rev":"1-59414e77c768bc202142ac82c2f129de"}]
   201
 6 {"error":"not_found","reason":"deleted"}
   404
 CouchDB shouldn't report "ok" on step 5 and then go on to claim that the doc 
 is deleted. Also, it seems to work on the second try:
 7 curl -X POST $url -d '{"_id": "bug", "key": "value"}'
 8 curl -X GET $url/bug
 7 {"ok":true,"id":"bug","rev":"3-674f864b73df1c80925e48436e21d550"}
   201
 8 {"_id":"bug","_rev":"3-674f864b73df1c80925e48436e21d550","key":"value"}
   200



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (COUCHDB-2262) CouchDB 1.6.0 missing Accept in Vary

2014-06-25 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson updated COUCHDB-2262:
---

Priority: Minor  (was: Major)

This one is actually true, we should send the Vary header.

The impact of fixing it is very small, though. Browsers will typically always 
get the text/plain (which they can render) and, if they got the same response 
body but with application/json they'd at most fail to render it (and vice 
versa). Programmatic access would process the response body correctly (since it 
is valid JSON).

I've moved this to minor to reflect that.

 CouchDB 1.6.0 missing Accept in Vary
 

 Key: COUCHDB-2262
 URL: https://issues.apache.org/jira/browse/COUCHDB-2262
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Kaj Nielsen
Priority: Minor

 CouchDB does server-driven content negotiation.
 Meaning that it returns text/html, text/plain or application/json, depending 
 on which context a particular URL was called in (after performing an educated 
 guess using the Accept request header).
 A bit yucky, but necessary at times.
 The bug is that CouchDB does not include Accept in its Vary response 
 header.
 This means that if you stick a modern HTTP cache in front of CouchDB, then 
 the cache won't know that it has to fetch different content on behalf of 
 clients that send different Accept headers.
 This results in clients getting a cached copy with a content type of whatever 
 the first client that hit the cache happened to prefer, rather than what the 
 client needs.
 Solution: include Vary: Accept, ... in the response headers generated by 
 CouchDB.
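 To illustrate (a sketch; the URL is a placeholder): two clients that differ 
 only in their Accept header get different Content-Types back, which is exactly 
 what Vary: Accept is meant to tell a shared cache.
 {code}
 import requests

 url = 'http://localhost:5984/db/doc'
 for accept in ('application/json', 'text/html'):
     r = requests.get(url, headers={'Accept': accept})
     # Prints the negotiated Content-Type for each Accept header.
     print(accept, '->', r.headers.get('Content-Type'))
 {code}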



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2259) CouchDB 1.6.0 returns wrong Vary

2014-06-25 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14043186#comment-14043186
 ] 

Robert Newson commented on COUCHDB-2259:



It is true that a given revision of a document does not change but the GET 
request might change from a 200 response to a 404 response, which is why caches 
must revalidate with the origin server.

Be assured that CouchDB developers are very familiar with the HTTP 1.1 
specification and the caching model in particular. We have deliberately chosen 
the current behavior to ensure that correctly written HTTP caches are caching 
correctly. That performance suffers because of this header is also a conscious 
choice, and one we are fully aware of. Cache hits for document lookups are not 
very useful; where this matters more is attachments (if they're large) and view 
queries (we skip the computation entirely if the view has not changed since you 
queried).

Finally, please stop filing duplicates of your issues. If you continue to do so 
I will have your account disabled.

 CouchDB 1.6.0 returns wrong Vary
 

 Key: COUCHDB-2259
 URL: https://issues.apache.org/jira/browse/COUCHDB-2259
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Kaj Nielsen

 CouchDB documents are immutable.  Even when deleted, a new revision is 
 assigned to the document (and the _deleted flag is set).
 As such, HTTP requests for specific document revisions, by use of the 
 If-Match header, are cacheable.
 HTTP requests with no revision numbers are not cacheable.
 Therefore, for If-Match requests, the correct caching headers are:
  - Vary: If-Match
  - no Cache-Control header (or a default value, eg. 24hrs)
 And for non-revisioned requests, the correct headers are:
  - Vary: If-Match
  - Cache-Control: must-revalidate
 However:
  GET /db/doc HTTP/1.1
  Host: localhost:5984
  If-Match: 167-37f82fdbfdc49d38b1c66815deb1e338
  
  HTTP/1.1 200 OK
  Server: CouchDB/1.6.0 (Erlang OTP/R15B01)
  ETag: 167-37f82fdbfdc49d38b1c66815deb1e338
  Date: Tue, 24 Jun 2014 22:34:20 GMT
  Content-Type: text/plain; charset=utf-8
  Content-Length: 649
  Cache-Control: must-revalidate
  
 ...
 As seen above, even when requesting a very specific revision of a document, 
 CouchDB still requests revalidation with must-revalidate.
 Thereby making modern HTTP caches unable to take advantage of caching the 
 (potentially large) body of HTTP transactions that are perfectly cacheable 
 (because the request includes If-Match).
 Also, CouchDB incorrectly does not distinguish this request from a non-revisioned 
 request with regard to the must-revalidate cache-control header.
 must-revalidate should not be set on responses where If-Match is present in 
 the request.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2260) CouchDB 1.6.0 returns wrong Cache-Control header

2014-06-25 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14043194#comment-14043194
 ] 

Robert Newson commented on COUCHDB-2260:


RFC 7234 governs how HTTP 1.1 should cache and states (at 
https://tools.ietf.org/html/rfc7234#section-4.2.4):

'A cache MUST NOT generate a stale response if it is prohibited by an
   explicit in-protocol directive (e.g., by a "no-store" or "no-cache"
   cache directive, a "must-revalidate" cache-response-directive, or an
   applicable "s-maxage" or "proxy-revalidate" cache-response-directive;
   see Section 5.2.2).'

If adding a max-age=0 would help in particular cases (e.g., Varnish) then I 
don't see an objection to adding it. I hope you can see that the way you have 
filed tickets has been counterproductive (filing duplicates, addressing 
multiple issues in one ticket, and stating that things are bugs when you don't 
know the background).

I would like to address the substantive issues across the tickets you have 
filed. As I see them, there are two;

1) Send Vary: Accept in order for caches to correctly serve the right 
content-type
2) Add max-age=0 to ensure caches revalidate with the origin as they should

Further, I think you want to allow additional caching, where contacting the 
origin server is omitted. That is tricky as noted elsewhere. We're a database 
and correctness trumps performance every time. There must be cases where we can 
allow that kind of caching without breaking things but it is far more nuanced 
than your current understanding, I believe.



 CouchDB 1.6.0 returns wrong Cache-Control header
 

 Key: COUCHDB-2260
 URL: https://issues.apache.org/jira/browse/COUCHDB-2260
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Kaj Nielsen

 CouchDB returns must-revalidate in all responses.
 This does not mean what you think it means.
 Modern HTTP caches will happily return cached copies of must-revalidate 
 items without consulting the backend server.
 According to the RFC:
 
 Section 14.9.4 of HTTP/1.1:
 When the must-revalidate directive is present in a response received by a 
 cache, that cache MUST NOT use the entry after it becomes stale to respond to 
 a subsequent request without first revalidating it with the origin server
 Section 14.8 of HTTP/1.1:
 If the response includes the must-revalidate cache-control directive, the 
 cache MAY use that response in replying to a subsequent request. But if the 
 response is stale, all caches MUST first revalidate it with the origin 
 server...
 
 Meaning that the cache may serve the item without revalidation as long as the 
 response is not stale yet.
 When is the response stale?
 Regarding staleness, see for example RFC5861:
 ==
 A response containing:
  Cache-Control: max-age=600, ...
indicates that it is fresh for 600 seconds
 ==
 Meaning that the content goes stale after the max-age expires.
 Since CouchDB does not set a max-age, the cache may assume that the content 
 does not go stale, and thus that must-revalidate is irrelevant.
 Which is exactly what Varnish does when you stick it in front of CouchDB.
 Correct solution is to set either:
  * Cache-Control: max-age=0, must-revalidate
 Or:
  * Cache-Control: no-cache
 The latter meaning that the cache must also refresh on each request (but is 
 free to use conditional GETs if it wishes) - see RFC.
 See also:
 https://stackoverflow.com/questions/7573466/is-cache-controlmust-revalidate-obliging-to-validate-all-requests-or-just-the



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (COUCHDB-2259) CouchDB 1.6.0 returns wrong Vary

2014-06-25 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson updated COUCHDB-2259:
---

Skill Level: Regular Contributors Level (Easy to Medium)  (was: Guru Level 
(Everyone buy this person a beer at the next conference!))

 CouchDB 1.6.0 returns wrong Vary
 

 Key: COUCHDB-2259
 URL: https://issues.apache.org/jira/browse/COUCHDB-2259
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Kaj Nielsen

 CouchDB documents are immutable.  Even when deleted, a new revision is 
 assigned to the document (and the _deleted flag is set).
 As such, HTTP requests for specific document revisions, by use of the 
 If-Match header, are cacheable.
 HTTP requests with no revision numbers are not cacheable.
 Therefore, for If-Match requests, the correct caching headers are:
  - Vary: If-Match
  - no Cache-Control header (or a default value, eg. 24hrs)
 And for non-revisioned requests, the correct headers are:
  - Vary: If-Match
  - Cache-Control: must-revalidate
 However:
  GET /db/doc HTTP/1.1
  Host: localhost:5984
  If-Match: 167-37f82fdbfdc49d38b1c66815deb1e338
  
  HTTP/1.1 200 OK
  Server: CouchDB/1.6.0 (Erlang OTP/R15B01)
  ETag: 167-37f82fdbfdc49d38b1c66815deb1e338
  Date: Tue, 24 Jun 2014 22:34:20 GMT
  Content-Type: text/plain; charset=utf-8
  Content-Length: 649
  Cache-Control: must-revalidate
  
 ...
 As seen above, even when requesting a very specific revision of a document, 
 CouchDB still requests revalidation with must-revalidate.
 Thereby making modern HTTP caches unable to take advantage of caching the 
 (potentially large) body of HTTP transactions that are perfectly cacheable 
 (because the request includes If-Match).
 Also, CouchDB incorrectly does not distinguish this request from a non-revisioned 
 request with regard to the must-revalidate cache-control header.
 must-revalidate should not be set on responses where If-Match is present in 
 the request.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2260) CouchDB 1.6.0 returns wrong Cache-Control header

2014-06-25 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14043207#comment-14043207
 ] 

Robert Newson commented on COUCHDB-2260:


I suggest moving this to the couchdb developer mailing list; JIRA is not the 
place for a discussion as broad as this.

 CouchDB 1.6.0 returns wrong Cache-Control header
 

 Key: COUCHDB-2260
 URL: https://issues.apache.org/jira/browse/COUCHDB-2260
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Kaj Nielsen

 CouchDB returns must-revalidate in all responses.
 This does not mean what you think it means.
 Modern HTTP caches will happily return cached copies of must-revalidate 
 items without consulting the backend server.
 According to the RFC:
 
 Section 14.9.4 of HTTP/1.1:
 When the must-revalidate directive is present in a response received by a 
 cache, that cache MUST NOT use the entry after it becomes stale to respond to 
 a subsequent request without first revalidating it with the origin server
 Section 14.8 of HTTP/1.1:
 If the response includes the must-revalidate cache-control directive, the 
 cache MAY use that response in replying to a subsequent request. But if the 
 response is stale, all caches MUST first revalidate it with the origin 
 server...
 
 Meaning that the cache may serve the item without revalidation as long as the 
 response is not stale yet.
 When is the response stale?
 Regarding staleness, see for example RFC5861:
 ==
 A response containing:
  Cache-Control: max-age=600, ...
indicates that it is fresh for 600 seconds
 ==
 Meaning that the content goes stale after the max-age expires.
 Since CouchDB does not set a max-age, the cache may assume that the content 
 does not go stale, and thus that must-revalidate is irrelevant.
 Which is exactly what Varnish does when you stick it in front of CouchDB.
 Correct solution is to set either:
  * Cache-Control: max-age=0, must-revalidate
 Or:
  * Cache-Control: no-cache
 The latter meaning that the cache must also refresh on each request (but is 
 free to use conditional GETs if it wishes) - see RFC.
 See also:
 https://stackoverflow.com/questions/7573466/is-cache-controlmust-revalidate-obliging-to-validate-all-requests-or-just-the



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (COUCHDB-2258) CouchDB 1.6.0 returns wrong Content-Type

2014-06-24 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-2258.


Resolution: Not a Problem

As a pragmatic choice, CouchDB has always returned text/plain when it believes 
the caller is a browser rather than an application.

Use an Accept: application/json request header to get your desired result.
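For example (a sketch; the URL is a placeholder):
{code}
import requests

# Ask for application/json explicitly and the response comes back with that
# Content-Type instead of text/plain.
r = requests.get('http://localhost:5984/db/doc',
                 headers={'Accept': 'application/json'})
print(r.headers['Content-Type'])
{code}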


 CouchDB 1.6.0 returns wrong Content-Type
 

 Key: COUCHDB-2258
 URL: https://issues.apache.org/jira/browse/COUCHDB-2258
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Kaj Nielsen

 CouchDB returns JSON documents.
 So the Content-Type set by CouchDB should be application/json.
 Instead:
  GET /db/doc HTTP/1.1
  Host: localhost:5984
 
  HTTP/1.1 200 OK
  Server: CouchDB/1.6.0 (Erlang OTP/R15B01)
  ETag: 167-37f82fdbfdc49d38b1c66815deb1e338
  Date: Tue, 24 Jun 2014 22:26:39 GMT
  Content-Type: text/plain; charset=utf-8
  Content-Length: 649
  Cache-Control: must-revalidate
 
 ...
 As seen above, CouchDB returns text/plain.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2259) CouchDB 1.6.0 returns wrong Vary

2014-06-24 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14042806#comment-14042806
 ] 

Robert Newson commented on COUCHDB-2259:


CouchDB documents are cacheable, but it's important to revalidate with the host 
server (in case the document has changed). This is still caching; the body can be 
served from the cache if the host server returns a 304 response.
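A sketch of that revalidation flow (the URL is a placeholder; the ETag comes 
from the first response and If-None-Match is standard HTTP):
{code}
import requests

url = 'http://localhost:5984/db/doc'
first = requests.get(url)
etag = first.headers['ETag']

# Revalidate with a conditional GET; a 304 response has no body, so the
# cached copy can be reused.
second = requests.get(url, headers={'If-None-Match': etag})
print(second.status_code)  # 304 if the document has not changed
{code}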

 CouchDB 1.6.0 returns wrong Vary
 

 Key: COUCHDB-2259
 URL: https://issues.apache.org/jira/browse/COUCHDB-2259
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Kaj Nielsen

 CouchDB documents are immutable.  Even when deleted, a new revision is 
 assigned to the document (and the _deleted flag is set).
 As such, HTTP requests for specific document revisions, by use of the 
 If-Match header, are cacheable.
 HTTP requests with no revision numbers are not cacheable.
 Therefore, for If-Match requests, the correct caching headers are:
  - Vary: If-Match
  - no Cache-Control header (or a default value, eg. 24hrs)
 And for non-revisioned requests, the correct headers are:
  - Vary: If-Match
  - Cache-Control: must-revalidate
 However:
  GET /db/doc HTTP/1.1
  Host: localhost:5984
  If-Match: 167-37f82fdbfdc49d38b1c66815deb1e338
  
  HTTP/1.1 200 OK
  Server: CouchDB/1.6.0 (Erlang OTP/R15B01)
  ETag: 167-37f82fdbfdc49d38b1c66815deb1e338
  Date: Tue, 24 Jun 2014 22:34:20 GMT
  Content-Type: text/plain; charset=utf-8
  Content-Length: 649
  Cache-Control: must-revalidate
  
 ...
 As seen above, even when requesting a very specific revision of a document, 
 CouchDB still requests revalidation with must-revalidate.
 Thereby making modern HTTP caches unable to take advantage of caching the 
 (potentially large) body of HTTP transactions that are perfectly cacheable 
 (because the request includes If-Match).
 Also, CouchDB incorrectly does not distinguish this request from a non-revisioned 
 request with regard to the must-revalidate cache-control header.
 must-revalidate should not be set on responses where If-Match is present in 
 the request.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (COUCHDB-2259) CouchDB 1.6.0 returns wrong Vary

2014-06-24 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-2259.


Resolution: Not a Problem

 CouchDB 1.6.0 returns wrong Vary
 

 Key: COUCHDB-2259
 URL: https://issues.apache.org/jira/browse/COUCHDB-2259
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Kaj Nielsen

 CouchDB documents are immutable.  Even when deleted, a new revision is 
 assigned to the document (and the _deleted flag is set).
 As such, HTTP requests for specific document revisions, by use of the 
 If-Match header, are cacheable.
 HTTP requests with no revision numbers are not cacheable.
 Therefore, for If-Match requests, the correct caching headers are:
  - Vary: If-Match
  - no Cache-Control header (or a default value, eg. 24hrs)
 And for non-revisioned requests, the correct headers are:
  - Vary: If-Match
  - Cache-Control: must-revalidate
 However:
  GET /db/doc HTTP/1.1
  Host: localhost:5984
  If-Match: 167-37f82fdbfdc49d38b1c66815deb1e338
  
  HTTP/1.1 200 OK
  Server: CouchDB/1.6.0 (Erlang OTP/R15B01)
  ETag: 167-37f82fdbfdc49d38b1c66815deb1e338
  Date: Tue, 24 Jun 2014 22:34:20 GMT
  Content-Type: text/plain; charset=utf-8
  Content-Length: 649
  Cache-Control: must-revalidate
  
 ...
 As seen above, even when requesting a very specific revision of a document, 
 CouchDB still requests revalidation with must-revalidate.
 Thereby making modern HTTP caches unable to take advantage of caching the 
 (potentially large) body of HTTP transactions that are perfectly cacheable 
 (because the request includes If-Match).
 Also, CouchDB incorrectly does not distinguish this request from a non-revisioned 
 request with regard to the must-revalidate cache-control header.
 must-revalidate should not be set on responses where If-Match is present in 
 the request.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2259) CouchDB 1.6.0 returns wrong Vary

2014-06-24 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14042810#comment-14042810
 ] 

Robert Newson commented on COUCHDB-2259:


"no Cache-Control header (or a default value, eg. 24hrs)"

This is certainly not correct in any absolute sense; it's merely your 
preference. Returning such a result would violate our role as a database.

 CouchDB 1.6.0 returns wrong Vary
 

 Key: COUCHDB-2259
 URL: https://issues.apache.org/jira/browse/COUCHDB-2259
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Kaj Nielsen

 CouchDB documents are immutable.  Even when deleted, a new revision is 
 assigned to the document (and the _deleted flag is set).
 As such, HTTP requests for specific document revisions, by use of the 
 If-Match header, are cacheable.
 HTTP requests with no revision numbers are not cacheable.
 Therefore, for If-Match requests, the correct caching headers are:
  - Vary: If-Match
  - no Cache-Control header (or a default value, eg. 24hrs)
 And for non-revisioned requests, the correct headers are:
  - Vary: If-Match
  - Cache-Control: must-revalidate
 However:
  GET /db/doc HTTP/1.1
  Host: localhost:5984
  If-Match: 167-37f82fdbfdc49d38b1c66815deb1e338
  
  HTTP/1.1 200 OK
  Server: CouchDB/1.6.0 (Erlang OTP/R15B01)
  ETag: 167-37f82fdbfdc49d38b1c66815deb1e338
  Date: Tue, 24 Jun 2014 22:34:20 GMT
  Content-Type: text/plain; charset=utf-8
  Content-Length: 649
  Cache-Control: must-revalidate
  
 ...
 As seen above, even when requesting a very specific revision of a document, 
 CouchDB still requests revalidation with must-revalidate.
 Thereby making modern HTTP caches unable to take advantage of caching the 
 (potentially large) body of HTTP transactions that are perfectly cacheable 
 (because the request includes If-Match).
 Also, CouchDB incorrectly does not distinguish this request from a non-revisioned 
 request with regard to the must-revalidate cache-control header.
 must-revalidate should not be set on responses where If-Match is present in 
 the request.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (COUCHDB-2256) Implement since=now in _changes

2014-06-16 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-2256.


Resolution: Fixed

 Implement since=now in _changes
 ---

 Key: COUCHDB-2256
 URL: https://issues.apache.org/jira/browse/COUCHDB-2256
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
Reporter: Nolan Lawson

 Was informed by [~rnewson] that this hasn't been ported to BigCouch yet. 
 Testing with Cloudant, it gives me a {{badargs}} error. In CouchDB 1.5.0, 
 {{_changes?since=now}} works fine.
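 For reference, the behaviour being ported looks like this (a sketch against a 
 local CouchDB; the database name is a placeholder):
 {code}
 import requests

 # since=now skips all history; the response contains no old changes and
 # last_seq reflects the current update sequence.
 resp = requests.get('http://localhost:5984/db/_changes',
                     params={'since': 'now'})
 print(resp.json())
 {code}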



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-1779) Support of HTTP PATCH method to upload/update attachments in chunks

2014-06-07 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14020787#comment-14020787
 ] 

Robert Newson commented on COUCHDB-1779:


From http://tools.ietf.org/html/rfc7231#appendix-B

Servers are no longer required to handle all Content-* header fields
   and use of Content-Range has been explicitly banned in PUT requests.
   (Section 4.3.4)

 Support of HTTP PATCH method to upload/update attachments in chunks
 ---

 Key: COUCHDB-1779
 URL: https://issues.apache.org/jira/browse/COUCHDB-1779
 Project: CouchDB
  Issue Type: New Feature
  Components: Database Core, HTTP Interface
Reporter: Sebastian Podjasek
Priority: Minor

 I'm wondering whether it would be possible to implement a PATCH method for 
 document attachments.
 I'm currently facing a theoretical problem of uploading large files over a GSM 
 network; my storage back-end is CouchDB with our own API served in front of it. 
 I was thinking about a few other solutions, but all of them involve some 
 post-processing of the document after receiving the last chunk; it would be 
 great to just invoke this:
PATCH 
 /database/0519690fc465fc0e9cc0f89fa87973fc/bigfile.dat?_rev=13-c969cb72e5428ca2ccdd191e0cd7bf4b
  HTTP/1.1
Content-Type: application/octet-stream
Range: bytes=1200-1300
Content-Length: 100
 What do you think about this? Is it theoretically possible?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2248) Replace master and slave terminology

2014-05-28 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14010935#comment-14010935
 ] 

Robert Newson commented on COUCHDB-2248:


Noah, I ask you now to stop. You've said your piece, the topic has been 
debated. Please don't keep this topic alive until you win. I know that you 
don't want our community direction to be decided by only those players with the 
energy to keep fighting. The topic has been discussed and reasonable people 
have objected. You've twice characterised their objections unfairly and I ask 
you to ponder that. Please close the ticket.

 Replace master and slave terminology
 

 Key: COUCHDB-2248
 URL: https://issues.apache.org/jira/browse/COUCHDB-2248
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Documentation
Reporter: Noah Slater
Priority: Trivial

 Inspired by the comments on this PR:
 https://github.com/django/django/pull/2692
 Summary is: `master` and `slave` are racially charged terms, and it would be 
 good to avoid them. Django have gone for `primary` and `replica`. But we also 
 have to deal with what we now call multi-master setups. I propose peer to 
 peer as a replacement, or just peer if you're describing one node.
 As far as I can tell, the primary work here is the docs. The wiki and any 
 supporting material can be updated after.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2248) Replace master and slave terminology

2014-05-28 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14011260#comment-14011260
 ] 

Robert Newson commented on COUCHDB-2248:


because it is discriminatory and offensive speech.

This is your opinion but you state it like a fact. Multiple voices have 
expressed that they do not agree with you. You respond by dismissing their 
right to express that opinion.

This thread will never end but I will try again.

To update the documentation, please just show an actual diff and we can discuss 
or vote on it.

If you intend, by this ticket, to prohibit us all from using the term 
master/slave when referring to databases replication in the sense of article 
https://en.wikipedia.org/wiki/Master-slave_(technology), then please accept 
that it's not happening.


 Replace master and slave terminology
 

 Key: COUCHDB-2248
 URL: https://issues.apache.org/jira/browse/COUCHDB-2248
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Documentation
Reporter: Noah Slater
Priority: Trivial

 Inspired by the comments on this PR:
 https://github.com/django/django/pull/2692
 Summary is: `master` and `slave` are racially charged terms, and it would be 
 good to avoid them. Django have gone for `primary` and `replica`. But we also 
 have to deal with what we now call multi-master setups. I propose peer to 
 peer as a replacement, or just peer if you're describing one node.
 As far as I can tell, the primary work here is the docs. The wiki and any 
 supporting material can be updated after.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2248) Replace master and slave terminology

2014-05-28 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14011293#comment-14011293
 ] 

Robert Newson commented on COUCHDB-2248:


To the proposal: You could use multi-master, single-master, partitioning, 
sharding, write-through caches, and all sorts of other complex techniques.

+1.

Amen.


 Replace master and slave terminology
 

 Key: COUCHDB-2248
 URL: https://issues.apache.org/jira/browse/COUCHDB-2248
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Documentation
Reporter: Noah Slater
Priority: Trivial

 Inspired by the comments on this PR:
 https://github.com/django/django/pull/2692
 Summary is: `master` and `slave` are racially charged terms, and it would be 
 good to avoid them. Django have gone for `primary` and `replica`. But we also 
 have to deal with what we now call multi-master setups. I propose peer to 
 peer as a replacement, or just peer if you're describing one node.
 As far as I can tell, the primary work here is the docs. The wiki and any 
 supporting material can be updated after.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2248) Replace master and slave terminology

2014-05-27 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14009725#comment-14009725
 ] 

Robert Newson commented on COUCHDB-2248:


Please retract your false accusation of feigned defeatism and debate this 
with a bit more respect.


 Replace master and slave terminology
 

 Key: COUCHDB-2248
 URL: https://issues.apache.org/jira/browse/COUCHDB-2248
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Documentation
Reporter: Noah Slater
Priority: Trivial

 Inspired by the comments on this PR:
 https://github.com/django/django/pull/2692
 Summary is: `master` and `slave` are racially charged terms, and it would be 
 good to avoid them. Django have gone for `primary` and `replica`. But we also 
 have to deal with what we now call multi-master setups. I propose peer to 
 peer as a replacement, or just peer if you're describing one node.
 As far as I can tell, the primary work here is the docs. The wiki and any 
 supporting material can be updated after.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2248) Replace master and slave terminology

2014-05-27 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14009785#comment-14009785
 ] 

Robert Newson commented on COUCHDB-2248:


multi-master is my preference over master-master too since ~forever.


 Replace master and slave terminology
 

 Key: COUCHDB-2248
 URL: https://issues.apache.org/jira/browse/COUCHDB-2248
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Documentation
Reporter: Noah Slater
Priority: Trivial

 Inspired by the comments on this PR:
 https://github.com/django/django/pull/2692
 Summary is: `master` and `slave` are racially charged terms, and it would be 
 good to avoid them. Django have gone for `primary` and `replica`. But we also 
 have to deal with what we now call multi-master setups. I propose peer to 
 peer as a replacement, or just peer if you're describing one node.
 As far as I can tell, the primary work here is the docs. The wiki and any 
 supporting material can be updated after.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2248) Replace master and slave terminology

2014-05-27 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14010062#comment-14010062
 ] 

Robert Newson commented on COUCHDB-2248:


replica does not mean slave, and, as previously mentioned, and just now 
mentioned again, master replica and slave replica are valid (if redundant) 
ways to express these terms.

in the sentence in question, master/slave is the simplest and most 
descriptive term to use to describe that couchdb can be used in a master/slave 
setup. The meaning is plain. To Alex's point about backup, I believe that is 
the intended functionality of a slave database in a master/slave setup, it 
serves as a backup of the master. One fails over to the backup if the master 
fails. If you like, and to break this deadlock, I'm +1 on primary/secondary 
but I remain -1 on changing away from the straightforward use of master/slave 
to something that is less clear (which I feel replica is).

 Replace master and slave terminology
 

 Key: COUCHDB-2248
 URL: https://issues.apache.org/jira/browse/COUCHDB-2248
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Documentation
Reporter: Noah Slater
Priority: Trivial

 Inspired by the comments on this PR:
 https://github.com/django/django/pull/2692
 Summary is: `master` and `slave` are racially charged terms, and it would be 
 good to avoid them. Django have gone for `primary` and `replica`. But we also 
 have to deal with what we now call multi-master setups. I propose peer to 
 peer as a replacement, or just peer if you're describing one node.
 As far as I can tell, the primary work here is the docs. The wiki and any 
 supporting material can be updated after.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2248) Replace master and slave terminology

2014-05-27 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14010474#comment-14010474
 ] 

Robert Newson commented on COUCHDB-2248:


I suggest we leave the one reference to master/slave as a database topology 
as it is. Recommend closing as 'not a problem'. The issue has been raised and 
debated, no one can argue that we intend these words pejoratively.

 Replace master and slave terminology
 

 Key: COUCHDB-2248
 URL: https://issues.apache.org/jira/browse/COUCHDB-2248
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Documentation
Reporter: Noah Slater
Priority: Trivial

 Inspired by the comments on this PR:
 https://github.com/django/django/pull/2692
 Summary is: `master` and `slave` are racially charged terms, and it would be 
 good to avoid them. Django have gone for `primary` and `replica`. But we also 
 have to deal with what we now call multi-master setups. I propose peer to 
 peer as a replacement, or just peer if you're describing one node.
 As far as I can tell, the primary work here is the docs. The wiki and any 
 supporting material can be updated after.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2248) Replace master and slave terminology

2014-05-26 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14009143#comment-14009143
 ] 

Robert Newson commented on COUCHDB-2248:


-1. For one thing, your claim that master/slave terminology is racially charged 
is itself racially charged (there is such a thing as white slavery and the 
terms also apply in BDSM).

These are terms of art in our application domain, it will be difficult to 
discuss the particulars of our database without using them. As Joan notes, we 
can't fully purge the term master anyway.

I am +1 on saying peer where it reads well. master/slave is just one 
configuration of couchdb and among the least interesting.


 Replace master and slave terminology
 

 Key: COUCHDB-2248
 URL: https://issues.apache.org/jira/browse/COUCHDB-2248
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Documentation
Reporter: Noah Slater
Priority: Trivial

 Inspired by the comments on this PR:
 https://github.com/django/django/pull/2692
 Summary is: `master` and `slave` are racially charged terms, and it would be 
 good to avoid them. Django have gone for `primary` and `replica`. But we also 
 have to deal with what we now call multi-master setups. I propose peer to 
 peer as a replacement, or just peer if you're describing one node.
 As far as I can tell, the primary work here is the docs. The wiki and any 
 supporting material can be updated after.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2248) Replace master and slave terminology

2014-05-26 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14009146#comment-14009146
 ] 

Robert Newson commented on COUCHDB-2248:


Meaning you can do as you please? Why open a ticket at all then?

 Replace master and slave terminology
 

 Key: COUCHDB-2248
 URL: https://issues.apache.org/jira/browse/COUCHDB-2248
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Documentation
Reporter: Noah Slater
Priority: Trivial

 Inspired by the comments on this PR:
 https://github.com/django/django/pull/2692
 Summary is: `master` and `slave` are racially charged terms, and it would be 
 good to avoid them. Django have gone for `primary` and `replica`. But we also 
 have to deal with what we now call multi-master setups. I propose peer to 
 peer as a replacement, or just peer if you're describing one node.
 As far as I can tell, the primary work here is the docs. The wiki and any 
 supporting material can be updated after.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2248) Replace master and slave terminology

2014-05-26 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14009164#comment-14009164
 ] 

Robert Newson commented on COUCHDB-2248:


The topic aside, the proposed change would be a commit to our source code repo 
and would change the contents of our release artifact; I'm confident the ASF 
rules intend for technically justified vetoes to hold.

 Replace master and slave terminology
 

 Key: COUCHDB-2248
 URL: https://issues.apache.org/jira/browse/COUCHDB-2248
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Documentation
Reporter: Noah Slater
Priority: Trivial

 Inspired by the comments on this PR:
 https://github.com/django/django/pull/2692
 Summary is: `master` and `slave` are racially charged terms, and it would be 
 good to avoid them. Django have gone for `primary` and `replica`. But we also 
 have to deal with what we now call multi-master setups. I propose peer to 
 peer as a replacement, or just peer if you're describing one node.
 As far as I can tell, the primary work here is the docs. The wiki and any 
 supporting material can be updated after.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2248) Replace master and slave terminology

2014-05-26 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14009165#comment-14009165
 ] 

Robert Newson commented on COUCHDB-2248:


"Considering how icky these terms make people feel" -- you have *not* made this 
case at all.


 Replace master and slave terminology
 

 Key: COUCHDB-2248
 URL: https://issues.apache.org/jira/browse/COUCHDB-2248
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Documentation
Reporter: Noah Slater
Priority: Trivial

 Inspired by the comments on this PR:
 https://github.com/django/django/pull/2692
 Summary is: `master` and `slave` are racially charged terms, and it would be 
 good to avoid them. Django have gone for `primary` and `replica`. But we also 
 have to deal with what we now call multi-master setups. I propose peer to 
 peer as a replacement, or just peer if you're describing one node.
 As far as I can tell, the primary work here is the docs. The wiki and any 
 supporting material can be updated after.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2248) Replace master and slave terminology

2014-05-26 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14009173#comment-14009173
 ] 

Robert Newson commented on COUCHDB-2248:


Let's move to review, then; let's see your proposed text changes in context. 
Context is everything.

Aside: I certainly want vetoes to include documentation, and I want our bylaws 
to be clear on that. Documentation is important and is often coupled to code; 
it would be odd indeed for it to be possible to veto a code change but not the 
documentation that describes it.


 Replace master and slave terminology
 

 Key: COUCHDB-2248
 URL: https://issues.apache.org/jira/browse/COUCHDB-2248
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Documentation
Reporter: Noah Slater
Priority: Trivial

 Inspired by the comments on this PR:
 https://github.com/django/django/pull/2692
 Summary is: `master` and `slave` are racially charged terms, and it would be 
 good to avoid them. Django have gone for `primary` and `replica`. But we also 
 have to deal with what we now call multi-master setups. I propose peer to 
 peer as a replacement, or just peer if you're describing one node.
 As far as I can tell, the primary work here is the docs. The wiki and any 
 supporting material can be updated after.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2248) Replace master and slave terminology

2014-05-26 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14009177#comment-14009177
 ] 

Robert Newson commented on COUCHDB-2248:


multi-master seems a good replacement. That term is well known (and something 
of a holy grail in other database systems...).

Agree that peer-to-peer is not quite right in this context.

 Replace master and slave terminology
 

 Key: COUCHDB-2248
 URL: https://issues.apache.org/jira/browse/COUCHDB-2248
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
  Components: Documentation
Reporter: Noah Slater
Priority: Trivial

 Inspired by the comments on this PR:
 https://github.com/django/django/pull/2692
 Summary is: `master` and `slave` are racially charged terms, and it would be 
 good to avoid them. Django have gone for `primary` and `replica`. But we also 
 have to deal with what we now call multi-master setups. I propose peer to 
 peer as a replacement, or just peer if you're describing one node.
 As far as I can tell, the primary work here is the docs. The wiki and any 
 supporting material can be updated after.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2246) Make OS daemons configuration key available

2014-05-20 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003184#comment-14003184
 ] 

Robert Newson commented on COUCHDB-2246:


We'd need a special prefix or something; otherwise this idea is a privilege 
escalation vector.
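
For illustration, a minimal sketch (not an existing feature) of how a daemon 
might read a per-instance section, assuming CouchDB passed the daemon its own 
key in an environment variable. The COUCHDB_OS_DAEMON_KEY name is invented for 
this example; the ["get", section] and ["log", message] stdio exchanges follow 
the 1.x OS daemon configuration protocol as I understand it:

{code:javascript}
#!/usr/bin/env node
// Hedged sketch: an os_daemon that asks CouchDB for its own config section
// over the existing stdio protocol. COUCHDB_OS_DAEMON_KEY is hypothetical --
// it only illustrates how CouchDB might tell the daemon which [section] is
// "its own". Note that a daemon can already request other sections, which is
// the escalation concern raised above.
var readline = require('readline');

var key = process.env.COUCHDB_OS_DAEMON_KEY || 'my_daemon';
var rl = readline.createInterface({ input: process.stdin });

// Ask for the whole [<key>] section; CouchDB should reply with one JSON line.
process.stdout.write(JSON.stringify(['get', key]) + '\n');

rl.once('line', function (line) {
  var config = JSON.parse(line); // e.g. {"dbs": "_users,_replicator", "speed": "1"}
  process.stdout.write(JSON.stringify(['log', 'config for ' + key + ': ' + line]) + '\n');
  // ... daemon work using config would go here ...
});
{code}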

 Make OS daemons configuration key available
 ---

 Key: COUCHDB-2246
 URL: https://issues.apache.org/jira/browse/COUCHDB-2246
 Project: CouchDB
  Issue Type: Improvement
  Security Level: public(Regular issues) 
Reporter: Johannes J. Schmidt

 When registering an os_daemon
 {code}
 [os_daemons]
 my_daemon = /usr/bin/command
 {code}
 there is no possibility to access the key *my_daemon*.
 I would like to manage daemon configuration under the specific key, which 
 enables having different configurations for multiple instances of a single 
 daemon script:
 {code}
 [os_daemons]
 slow_daemon = /usr/bin/command
 fast_daemon = /usr/bin/command
 ; settings for instance slow_daemon of /usr/bin/command
 [slow_daemon]
 dbs: _users,_replicator
 speed: 1
 ; settings for instance fast_daemon of /usr/bin/command
 [fast_daemon]
 dbs: projects,clients
 speed: 10
 {code}
 I think this could be done either via a new command, eg
 {code}
 [key]\n
 {code}
 or via environment variable, without breaking backwards compatibility.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2242) [REGRESSION?] List throws error in CouchDB 1.6.0rc5, worked in 1.5.0

2014-05-19 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14001511#comment-14001511
 ] 

Robert Newson commented on COUCHDB-2242:


ah, nvm, JIRA was hiding it. thanks for the report.

 [REGRESSION?] List throws error in CouchDB 1.6.0rc5, worked in 1.5.0
 

 Key: COUCHDB-2242
 URL: https://issues.apache.org/jira/browse/COUCHDB-2242
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
Reporter: Christian Lins

 A list function (see below) returns the following strangely formatted error 
 message:
 {noformat}
 {error:EXIT,reason:{{badmatch, {error,\n {enoent,\n
  [{erlang,open_port,\n  [{spawn,\n   
 \c:/Program Files (x86)/Apache Software 
 Foundation/CouchDB/lib/couch-1.6.0/priv/couchspawnkillable ./couchjs.exe 
 ../share/couchdb/server/main.js\},\n   
 [stream,{line,4096},binary,exit_status,hide]],\n  []},\n  
 {couch_os_process,init,1,\n  [{file,\n

 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_os_process.erl\},\n 
   {line,148}]},\n  
 {gen_server,init_it,6,[{file,\gen_server.erl\},{line,306}]},\n  
 {proc_lib,init_p_do_apply,3,\n  
 [{file,\proc_lib.erl\},{line,239}]}]}}},\n 
 [{couch_query_servers,new_process,3,\n  [{file,\n   
 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,477}]},\n  {couch_query_servers,lang_proc,3,\n  [{file,\n

 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,462}]},\n  {couch_query_servers,handle_call,3,\n  [{file,\n  
  
 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,322}]},\n  
 {gen_server,handle_msg,5,[{file,\gen_server.erl\},{line,580}]},\n  
 {proc_lib,init_p_do_apply,3,[{file,\proc_lib.erl\},{line,239}]}]}}
 {noformat}
 It worked well with CouchDB 1.5.0, now after upgrade to 1.6.0-rc5 it does not.
 The design document with the functions:
 {noformat}
 {
 "_id": "_design/report",
 "_rev": "1-604210c4db6dbc7ee800bef7f6cce5ac",
 "language": "javascript",
 "version": 1,
 "views": {
 "activities-per-month": {
 "map": "function(doc){var type=doc._id.substr(0,doc._id.indexOf(':'));if(type==='de.bremer-heimstiftung.vera.activity'){var date=new Date(parseInt(doc.activity.date.start,10));emit([1900+date.getYear(),date.getMonth()+1],doc)}}"
 }
 },
 "lists": {
 "sum-up-keys": "function(head,req){var row;var cnt={};while(row=getRow()){if(cnt[row.key]===undefined){cnt[row.key]=1}else{cnt[row.key]+=1}}send(JSON.stringify(cnt))}"
 }
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2242) [REGRESSION?] List throws error in CouchDB 1.6.0rc5, worked in 1.5.0

2014-05-19 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14001510#comment-14001510
 ] 

Robert Newson commented on COUCHDB-2242:


can you add the full error output please?

 [REGRESSION?] List throws error in CouchDB 1.6.0rc5, worked in 1.5.0
 

 Key: COUCHDB-2242
 URL: https://issues.apache.org/jira/browse/COUCHDB-2242
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
Reporter: Christian Lins

 A list function (see below) returns the following strangely formatted error 
 message:
 {noformat}
 {error:EXIT,reason:{{badmatch, {error,\n {enoent,\n
  [{erlang,open_port,\n  [{spawn,\n   
 \c:/Program Files (x86)/Apache Software 
 Foundation/CouchDB/lib/couch-1.6.0/priv/couchspawnkillable ./couchjs.exe 
 ../share/couchdb/server/main.js\},\n   
 [stream,{line,4096},binary,exit_status,hide]],\n  []},\n  
 {couch_os_process,init,1,\n  [{file,\n

 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_os_process.erl\},\n 
   {line,148}]},\n  
 {gen_server,init_it,6,[{file,\gen_server.erl\},{line,306}]},\n  
 {proc_lib,init_p_do_apply,3,\n  
 [{file,\proc_lib.erl\},{line,239}]}]}}},\n 
 [{couch_query_servers,new_process,3,\n  [{file,\n   
 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,477}]},\n  {couch_query_servers,lang_proc,3,\n  [{file,\n

 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,462}]},\n  {couch_query_servers,handle_call,3,\n  [{file,\n  
  
 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,322}]},\n  
 {gen_server,handle_msg,5,[{file,\gen_server.erl\},{line,580}]},\n  
 {proc_lib,init_p_do_apply,3,[{file,\proc_lib.erl\},{line,239}]}]}}
 {noformat}
 It worked well with CouchDB 1.5.0, now after upgrade to 1.6.0-rc5 it does not.
 The design document with the functions:
 {noformat}
 {
 "_id": "_design/report",
 "_rev": "1-604210c4db6dbc7ee800bef7f6cce5ac",
 "language": "javascript",
 "version": 1,
 "views": {
 "activities-per-month": {
 "map": "function(doc){var type=doc._id.substr(0,doc._id.indexOf(':'));if(type==='de.bremer-heimstiftung.vera.activity'){var date=new Date(parseInt(doc.activity.date.start,10));emit([1900+date.getYear(),date.getMonth()+1],doc)}}"
 }
 },
 "lists": {
 "sum-up-keys": "function(head,req){var row;var cnt={};while(row=getRow()){if(cnt[row.key]===undefined){cnt[row.key]=1}else{cnt[row.key]+=1}}send(JSON.stringify(cnt))}"
 }
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2242) [REGRESSION?] List throws error in CouchDB 1.6.0rc5, worked in 1.5.0

2014-05-19 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14001514#comment-14001514
 ] 

Robert Newson commented on COUCHDB-2242:


This looks like a problem with your install. Your config is pointing to 
c:/Program Files (x86)/Apache Software 
Foundation/CouchDB/lib/couch-1.6.0/priv/couchspawnkillable ./couchjs.exe 
../share/couchdb/server/main.js and it doesn't exist. This is either a mistake 
when you upgraded or it's the spaces in the path components.


 [REGRESSION?] List throws error in CouchDB 1.6.0rc5, worked in 1.5.0
 

 Key: COUCHDB-2242
 URL: https://issues.apache.org/jira/browse/COUCHDB-2242
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
Reporter: Christian Lins

 A list function (see below) returns the following strangely formatted error 
 message:
 {noformat}
 {error:EXIT,reason:{{badmatch, {error,\n {enoent,\n
  [{erlang,open_port,\n  [{spawn,\n   
 \c:/Program Files (x86)/Apache Software 
 Foundation/CouchDB/lib/couch-1.6.0/priv/couchspawnkillable ./couchjs.exe 
 ../share/couchdb/server/main.js\},\n   
 [stream,{line,4096},binary,exit_status,hide]],\n  []},\n  
 {couch_os_process,init,1,\n  [{file,\n

 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_os_process.erl\},\n 
   {line,148}]},\n  
 {gen_server,init_it,6,[{file,\gen_server.erl\},{line,306}]},\n  
 {proc_lib,init_p_do_apply,3,\n  
 [{file,\proc_lib.erl\},{line,239}]}]}}},\n 
 [{couch_query_servers,new_process,3,\n  [{file,\n   
 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,477}]},\n  {couch_query_servers,lang_proc,3,\n  [{file,\n

 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,462}]},\n  {couch_query_servers,handle_call,3,\n  [{file,\n  
  
 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,322}]},\n  
 {gen_server,handle_msg,5,[{file,\gen_server.erl\},{line,580}]},\n  
 {proc_lib,init_p_do_apply,3,[{file,\proc_lib.erl\},{line,239}]}]}}
 {noformat}
 It worked well with CouchDB 1.5.0, now after upgrade to 1.6.0-rc5 it does not.
 The design document with the functions:
 {noformat}
 {
 "_id": "_design/report",
 "_rev": "1-604210c4db6dbc7ee800bef7f6cce5ac",
 "language": "javascript",
 "version": 1,
 "views": {
 "activities-per-month": {
 "map": "function(doc){var type=doc._id.substr(0,doc._id.indexOf(':'));if(type==='de.bremer-heimstiftung.vera.activity'){var date=new Date(parseInt(doc.activity.date.start,10));emit([1900+date.getYear(),date.getMonth()+1],doc)}}"
 }
 },
 "lists": {
 "sum-up-keys": "function(head,req){var row;var cnt={};while(row=getRow()){if(cnt[row.key]===undefined){cnt[row.key]=1}else{cnt[row.key]+=1}}send(JSON.stringify(cnt))}"
 }
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (COUCHDB-2242) [REGRESSION?] List throws error in CouchDB 1.6.0rc5, worked in 1.5.0

2014-05-19 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-2242.


Resolution: Invalid

Marked invalid for now (housekeeping); please reopen if the problem persists 
after a clean reinstall to a path without spaces.

 [REGRESSION?] List throws error in CouchDB 1.6.0rc5, worked in 1.5.0
 

 Key: COUCHDB-2242
 URL: https://issues.apache.org/jira/browse/COUCHDB-2242
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
Reporter: Christian Lins

 A list function (see below) returns the following strangely formatted error 
 message:
 {noformat}
 {error:EXIT,reason:{{badmatch, {error,\n {enoent,\n
  [{erlang,open_port,\n  [{spawn,\n   
 \c:/Program Files (x86)/Apache Software 
 Foundation/CouchDB/lib/couch-1.6.0/priv/couchspawnkillable ./couchjs.exe 
 ../share/couchdb/server/main.js\},\n   
 [stream,{line,4096},binary,exit_status,hide]],\n  []},\n  
 {couch_os_process,init,1,\n  [{file,\n

 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_os_process.erl\},\n 
   {line,148}]},\n  
 {gen_server,init_it,6,[{file,\gen_server.erl\},{line,306}]},\n  
 {proc_lib,init_p_do_apply,3,\n  
 [{file,\proc_lib.erl\},{line,239}]}]}}},\n 
 [{couch_query_servers,new_process,3,\n  [{file,\n   
 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,477}]},\n  {couch_query_servers,lang_proc,3,\n  [{file,\n

 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,462}]},\n  {couch_query_servers,handle_call,3,\n  [{file,\n  
  
 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,322}]},\n  
 {gen_server,handle_msg,5,[{file,\gen_server.erl\},{line,580}]},\n  
 {proc_lib,init_p_do_apply,3,[{file,\proc_lib.erl\},{line,239}]}]}}
 {noformat}
 It worked well with CouchDB 1.5.0, now after upgrade to 1.6.0-rc5 it does not.
 The design document with the functions:
 {noformat}
 {
 "_id": "_design/report",
 "_rev": "1-604210c4db6dbc7ee800bef7f6cce5ac",
 "language": "javascript",
 "version": 1,
 "views": {
 "activities-per-month": {
 "map": "function(doc){var type=doc._id.substr(0,doc._id.indexOf(':'));if(type==='de.bremer-heimstiftung.vera.activity'){var date=new Date(parseInt(doc.activity.date.start,10));emit([1900+date.getYear(),date.getMonth()+1],doc)}}"
 }
 },
 "lists": {
 "sum-up-keys": "function(head,req){var row;var cnt={};while(row=getRow()){if(cnt[row.key]===undefined){cnt[row.key]=1}else{cnt[row.key]+=1}}send(JSON.stringify(cnt))}"
 }
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2242) [REGRESSION?] List throws error in CouchDB 1.6.0rc5, worked in 1.5.0

2014-05-19 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14001526#comment-14001526
 ] 

Robert Newson commented on COUCHDB-2242:


The CouchDB project releases source tarballs, and those are what we're voting 
on. Nick has, as a courtesy, made Windows binaries available, but they clearly 
have some issues.

 [REGRESSION?] List throws error in CouchDB 1.6.0rc5, worked in 1.5.0
 

 Key: COUCHDB-2242
 URL: https://issues.apache.org/jira/browse/COUCHDB-2242
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
Reporter: Christian Lins

 A list function (see below) returns the following strangely formatted error 
 message:
 {noformat}
 {error:EXIT,reason:{{badmatch, {error,\n {enoent,\n
  [{erlang,open_port,\n  [{spawn,\n   
 \c:/Program Files (x86)/Apache Software 
 Foundation/CouchDB/lib/couch-1.6.0/priv/couchspawnkillable ./couchjs.exe 
 ../share/couchdb/server/main.js\},\n   
 [stream,{line,4096},binary,exit_status,hide]],\n  []},\n  
 {couch_os_process,init,1,\n  [{file,\n

 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_os_process.erl\},\n 
   {line,148}]},\n  
 {gen_server,init_it,6,[{file,\gen_server.erl\},{line,306}]},\n  
 {proc_lib,init_p_do_apply,3,\n  
 [{file,\proc_lib.erl\},{line,239}]}]}}},\n 
 [{couch_query_servers,new_process,3,\n  [{file,\n   
 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,477}]},\n  {couch_query_servers,lang_proc,3,\n  [{file,\n

 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,462}]},\n  {couch_query_servers,handle_call,3,\n  [{file,\n  
  
 \c:/cygwin/relax/apache-couchdb-1.6.0/src/couchdb/couch_query_servers.erl\},\n
{line,322}]},\n  
 {gen_server,handle_msg,5,[{file,\gen_server.erl\},{line,580}]},\n  
 {proc_lib,init_p_do_apply,3,[{file,\proc_lib.erl\},{line,239}]}]}}
 {noformat}
 It worked well with CouchDB 1.5.0, now after upgrade to 1.6.0-rc5 it does not.
 The design document with the functions:
 {noformat}
 {
 "_id": "_design/report",
 "_rev": "1-604210c4db6dbc7ee800bef7f6cce5ac",
 "language": "javascript",
 "version": 1,
 "views": {
 "activities-per-month": {
 "map": "function(doc){var type=doc._id.substr(0,doc._id.indexOf(':'));if(type==='de.bremer-heimstiftung.vera.activity'){var date=new Date(parseInt(doc.activity.date.start,10));emit([1900+date.getYear(),date.getMonth()+1],doc)}}"
 }
 },
 "lists": {
 "sum-up-keys": "function(head,req){var row;var cnt={};while(row=getRow()){if(cnt[row.key]===undefined){cnt[row.key]=1}else{cnt[row.key]+=1}}send(JSON.stringify(cnt))}"
 }
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (COUCHDB-2240) The replication manager should be smarter

2014-05-17 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson updated COUCHDB-2240:
---

Issue Type: New Feature  (was: Bug)
   Summary: The replication manager should be smarter  (was: Many 
continuous replications cause DOS)

The original title and issue type really amount to an acknowledgment that a 
server can be overwhelmed by client load, which is true of many things.

I've adapted the ticket to address the real problem, that the code that manages 
the _replicator databases insists on running all the jobs simultaneously. This 
should be configurable, and the replicator manager should cycle through jobs in 
some fashion to ensure all replications make progress.

When I pondered this before, I figured the smart thing to do for any 
continuous:true document in the _replicator database was to run each of them 
repeatedly without the continuous:true flag.

We might also go further and support different priority levels or ToS flags but 
the first version should simply break the 1-for-1 nature of _replicator.
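
As a rough illustration of that idea (not an implementation), here is a hedged 
sketch in which a would-be continuous job is re-posted to /_replicate as 
repeated one-shot replications; the host, databases, and interval are 
assumptions for the example:

{code:javascript}
// Hedged sketch: run one _replicator-style job as repeated one-shot
// replications instead of a permanently running continuous one, so a
// scheduler could bound concurrency and cycle through many jobs.
var http = require('http');

function replicateOnce(source, target, done) {
  var body = JSON.stringify({ source: source, target: target }); // no continuous:true
  var req = http.request({
    host: 'localhost', port: 5984, path: '/_replicate', method: 'POST',
    headers: { 'Content-Type': 'application/json' }
  }, function (res) {
    res.resume();        // drain the response body
    res.on('end', done); // one pass has finished
  });
  req.end(body);
}

// Re-run the same pass every 60 seconds; a real manager would interleave many
// jobs here rather than dedicating a permanent slot to each one.
setInterval(function () {
  replicateOnce('http://localhost:5984/projects',
                'http://example.org:5984/projects',
                function () { console.log('replication pass finished'); });
}, 60 * 1000);
{code}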

 The replication manager should be smarter
 -

 Key: COUCHDB-2240
 URL: https://issues.apache.org/jira/browse/COUCHDB-2240
 Project: CouchDB
  Issue Type: New Feature
  Security Level: public(Regular issues) 
Reporter: Eli Stevens

 Currently, I can configure an arbitrary number of replications between 
 localhost DBs (in my case, they are in the _replicator DB with continuous set 
 to true). However, there is a limit beyond which requests to the DB start to 
 fail.  Trying to do another replication fails with the error:
 ServerError: (500, ('checkpoint_commit_failure', Target database out of 
 sync. Try to increase max_dbs_open at the target's server.))
 Due to COUCHDB-2239, it's not clear what the actual issue is. 
 I also believe that while the DB was in this state GET requests to documents 
 were also failing, but the machine that has the logs of this has already had 
 its drives wiped. If need be, I can recreate the situation and provide those 
 logs as well.
 I think that instead of there being a single fixed pool of resources that 
 cause errors when exhausted, the system should have a per-task-type pool of 
 resources that result in performance degradation when exhausted. N 
 replication workers with P DB connections, and if that's not enough they 
 start to round-robin; that sort of thing. When a user has too much to 
 replicate, it gets slow instead of failing.
 As it stands now, I have a potentially large number of continuous 
 replications that produce a fixed rate of data to replicate (because there's 
 a fixed application worker pool that writes the data in the first place). We 
 use a DB+replication per batch of data to process, and if we receive a burst 
 of batches, then couchdb starts failing. The current setup means that I'm 
 always going to be playing chicken between burst size and whatever setting 
 limit we're hitting.  That sucks, and isn't acceptable for a production 
 system, so we're going to have to re-architect how we do replication, and 
 basically implement poor-man's continuous by doing one off replications at 
 various points of our data processing runs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-1592) Free space check for automatic compaction doesn't follow symlinks

2014-05-16 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1441#comment-1441
 ] 

Robert Newson commented on COUCHDB-1592:


The code is not merged yet, so that's unknown. The ticket was waiting on the 
reporter, that's you, to confirm the fix.


 Free space check for automatic compaction doesn't follow symlinks
 -

 Key: COUCHDB-1592
 URL: https://issues.apache.org/jira/browse/COUCHDB-1592
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.2
Reporter: Nils Breunese

 We've got a problem with automatic compaction not running due to low 
 diskspace according to CouchDB. According to our system administrators there 
 is more than enough space (more than twice the currently used space), but the 
 data directory is a symlink to the real data storage. It seems CouchDB is 
 checking the diskspace on the filesystem on which the symlink resides instead 
 of the diskspace on the linked filesystem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2235) CouchDB logo location on sidebar: at the top or at the bottom?

2014-05-14 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13992773#comment-13992773
 ] 

Robert Newson commented on COUCHDB-2235:


The Fauxton team are working on a design; they have the expertise and are 
making the effort. The Fauxton team don't have to justify this decision to you; 
you are obliged to either veto it (with justification) or propose (and deliver) 
an alternative.

Please let's not promote our own personal opinions on UI and aesthetics to 
full-blown JIRA topics.


 CouchDB logo location on sidebar: at the top or at the bottom?
 --

 Key: COUCHDB-2235
 URL: https://issues.apache.org/jira/browse/COUCHDB-2235
 Project: CouchDB
  Issue Type: Question
  Security Level: public(Regular issues) 
  Components: Fauxton
Reporter: Alexander Shorin

 In COUCHDB-2234 a point was raised about the CouchDB logo location on the sidebar. Why did I say 
 that it is an ugly and doubtful decision?
 1. Fauxton loses brand context. When you open the main page, the eye's hot 
 spots are the top of the sidebar and the middle of the page with the database names. In fact, 
 you don't see the logo at the bottom unless you deliberately look for it. Actually, no 
 web site puts a hot spot in the bottom corners (except the right one, 
 which is specific to Windows users) - you can easily confirm this by reading 
 about eye-tracking techniques.
 Why is this bad? There is Fauxton for CouchDB and for Cloudant, and I know there is a port 
 for PouchDB. Refuge.io may also adopt Fauxton instead of Futon. In any case, 
 there are a couple of products which use Fauxton, and the only real 
 differences between them come down to two things: the colour scheme (if the project has a designer who 
 spent time rewriting all the styles) and the project logo (which is easy to 
 fix, since you don't have to be a designer or spend a lot of time fixing 
 CSS). 
 So we have a rather awkward situation: we open Fauxton and we don't 
 know which product it belongs to until we explore all the corners. Also note 
 that the CouchDB logo in Fauxton sits outside the eye's hot spots, since it shares the overall 
 design colour scheme.
 2. Losing functionality. In Futon, and in the old Fauxton sidebar, the logo 
 served two purposes: branding and an implicit button to collapse the sidebar. Now 
 these are split into two different elements, which causes:
 - reduced space for real burger-menu elements. At a 
 height of 768 px (13'' screen) there is no free space left for new menu items: 
 every new one will cause a scroll bar, which makes the design ugly.
 - reduced functionality. What do you expect when you click on 
 the logo? Returning to the home page - that's the intuitive and expected behaviour. 
 As for the Futon UX, collapsing the sidebar is not much expected, but also not harmful. 
 Now the logo is just a nice picture that does nothing, but consumes valuable 
 visible space that could be used more effectively.
 As far as I can see, there is no reasonable explanation for having the logo at the 
 bottom of the sidebar, so I wonder what the reasons were for doing this.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Closed] (COUCHDB-2236) Weird _users doc conflict when replicating from 1.5.0 - 1.6.0

2014-05-14 Thread Robert Newson (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson closed COUCHDB-2236.
--

Resolution: Cannot Reproduce

 Weird _users doc conflict when replicating from 1.5.0 - 1.6.0
 --

 Key: COUCHDB-2236
 URL: https://issues.apache.org/jira/browse/COUCHDB-2236
 Project: CouchDB
  Issue Type: Bug
  Security Level: public(Regular issues) 
Reporter: Isaac Z. Schlueter

 The upstream write-master for npm is a CouchDB 1.5.0.  (Since it is locked 
 down at the IP level, we're not at risk to the DOS fixed in 1.5.1.)
 All PUT/POST/DELETE requests are routed to this master box, as well as any 
 request with `?write=true` on the URL.  (Used for cases where we still do the 
 PUT/409/GET/PUT dance, rather than using a custom _update function.)
 This master box replicates to a replication hub.  The read slaves all 
 replicate from the replication hub.  Both the /registry and /_users databases 
 replicate continuously using a doc in the /_replicator database.
 As I understand it, since replication only goes in one direction, and all 
 writes to go the upstream master, conflicts should be impossible.
 We brought a 1.6.0 read slave online, version 1.6.0+build.fauxton-91-g5a2864b.
 On this 1.6.0 read slave (and only there), we're seeing /_users doc 
 conflicts, and it looks like it has a different password_sha and salt.  Here 
 is one such example: https://gist.github.com/isaacs/63f332a15109bbfdb8ac  
 (actual password_sha and salt mostly redacted, but enough bytes left in so 
 that you can see they're not matching.)
 A few weeks ago, this issue popped up, affecting about 400 user docs, and we 
 figured that it had to do with some instability or human error at the time 
 when that box was set up.  We deleted all of the conflicts, and verified that 
 all docs matched the upstream at that time.  We removed the /_replicator 
 entries, and re-created them using the same script we use to create them on 
 all the other read slaves.
 If this was just one or two docs, or happening across more of the read 
 slaves, I'd be more inclined to think that it has something to do with a 
 particular user, or our particular setup.  However, the /_replicator docs are 
 identical in the 1.6.0 box as on the other read slaves.  This is affecting 
 about 150 users, and only on that one box.
 We've taken the 1.6.0 read slave out of rotation for now, so it's not an 
 urgent issue for us.  If anyone wants to log in and have a look around, I can 
 grant access, but I hope that there's enough information here to track it 
 down.  Thanks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2235) CouchDB logo location on sidebar: at the top or at the bottom?

2014-05-10 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13992877#comment-13992877
 ] 

Robert Newson commented on COUCHDB-2235:


Understood, and you're certainly free to ask those questions; it's just that we 
almost never use JIRA Questions for that. You are of course free to file a bug 
against Fauxton for this once it lands.

 CouchDB logo location on sidebar: at the top or at the bottom?
 --

 Key: COUCHDB-2235
 URL: https://issues.apache.org/jira/browse/COUCHDB-2235
 Project: CouchDB
  Issue Type: Question
  Security Level: public(Regular issues) 
  Components: Fauxton
Reporter: Alexander Shorin

 In COUCHDB-2234 a point was raised about the CouchDB logo location on the sidebar. Why did I say 
 that it is an ugly and doubtful decision?
 1. Fauxton loses brand context. When you open the main page, the eye's hot 
 spots are the top of the sidebar and the middle of the page with the database names. In fact, 
 you don't see the logo at the bottom unless you deliberately look for it. Actually, no 
 web site puts a hot spot in the bottom corners (except the right one, 
 which is specific to Windows users) - you can easily confirm this by reading 
 about eye-tracking techniques.
 Why is this bad? There is Fauxton for CouchDB and for Cloudant, and I know there is a port 
 for PouchDB. Refuge.io may also adopt Fauxton instead of Futon. In any case, 
 there are a couple of products which use Fauxton, and the only real 
 differences between them come down to two things: the colour scheme (if the project has a designer who 
 spent time rewriting all the styles) and the project logo (which is easy to 
 fix, since you don't have to be a designer or spend a lot of time fixing 
 CSS). 
 So we have a rather awkward situation: we open Fauxton and we don't 
 know which product it belongs to until we explore all the corners. Also note 
 that the CouchDB logo in Fauxton sits outside the eye's hot spots, since it shares the overall 
 design colour scheme.
 2. Losing functionality. In Futon, and in the old Fauxton sidebar, the logo 
 served two purposes: branding and an implicit button to collapse the sidebar. Now 
 these are split into two different elements, which causes:
 - reduced space for real burger-menu elements. At a 
 height of 768 px (13'' screen) there is no free space left for new menu items: 
 every new one will cause a scroll bar, which makes the design ugly.
 - reduced functionality. What do you expect when you click on 
 the logo? Returning to the home page - that's the intuitive and expected behaviour. 
 As for the Futon UX, collapsing the sidebar is not much expected, but also not harmful. 
 Now the logo is just a nice picture that does nothing, but consumes valuable 
 visible space that could be used more effectively.
 As far as I can see, there is no reasonable explanation for having the logo at the 
 bottom of the sidebar, so I wonder what the reasons were for doing this.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2200) Support Erlang/OTP 17.0

2014-04-27 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13982314#comment-13982314
 ] 

Robert Newson commented on COUCHDB-2200:


+1 to merge, not that you need it under our committer rules.

 Support Erlang/OTP 17.0
 ---

 Key: COUCHDB-2200
 URL: https://issues.apache.org/jira/browse/COUCHDB-2200
 Project: CouchDB
  Issue Type: Improvement
  Security Level: public(Regular issues) 
  Components: Build System
Reporter: Dave Cottlehuber
Assignee: Dave Cottlehuber

 Requires patching configure.ac as usual, as major_version will change, and no 
 doubt other things too.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2200) Support Erlang/OTP 17.0

2014-04-27 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13982316#comment-13982316
 ] 

Robert Newson commented on COUCHDB-2200:


To be clear, I was speaking to Dave in my last comment.

 Support Erlang/OTP 17.0
 ---

 Key: COUCHDB-2200
 URL: https://issues.apache.org/jira/browse/COUCHDB-2200
 Project: CouchDB
  Issue Type: Improvement
  Security Level: public(Regular issues) 
  Components: Build System
Reporter: Dave Cottlehuber
Assignee: Dave Cottlehuber

 Requires patching configure.ac as usual, as major_version will change, and no 
 doubt other things too.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2227) Feature request: _all_docs?exclude_ddocs=true

2014-04-21 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13975907#comment-13975907
 ] 

Robert Newson commented on COUCHDB-2227:


Bit of an odd request or, at least, an overly specific one. _all_docs is 
basically a view keyed on doc._id, so it would be a conspicuous absence if one 
couldn't omit some rows from a view too. Since you can achieve the desired 
effect of this ticket with a list function, this is a Won't Fix, I think. At 
most, some exclude= parameter that works for _all_docs, _changes *and* views 
would be a useful addition, on the reasonable assumption that it would operate 
far faster than a list function (since it doesn't require a couchjs roundtrip) 
and would not require coding a design doc.
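
For reference, a minimal sketch of the list-function workaround mentioned above: 
a design doc with a view over every document plus a list function that drops 
_design/ rows. The design doc, view, and list names here are illustrative 
assumptions, not existing API:

{code:javascript}
{
  "_id": "_design/filters",
  "views": {
    "all": {
      "map": "function(doc) { emit(doc._id, {rev: doc._rev}); }"
    }
  },
  "lists": {
    "no_ddocs": "function(head, req) { start({headers: {'Content-Type': 'application/json'}}); var row, first = true; send('{\"rows\":['); while (row = getRow()) { if (row.id.indexOf('_design/') !== 0) { send((first ? '' : ',') + JSON.stringify(row)); first = false; } } send(']}'); }"
  }
}
{code}

Queried as GET /db/_design/filters/_list/no_ddocs/all, this should return the 
same rows as _all_docs minus the design documents, at the cost of the couchjs 
round trip noted above.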


 Feature request: _all_docs?exclude_ddocs=true
 -

 Key: COUCHDB-2227
 URL: https://issues.apache.org/jira/browse/COUCHDB-2227
 Project: CouchDB
  Issue Type: Wish
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Nolan Lawson
Priority: Minor

 Design docs are included in {{\_all_docs}} results, which is by design (hyuk 
 hyuk).  However, this can be surprising and unwanted behavior for new users, 
 and plus, sometimes it's tricky to exclude them, e.g. if your docids come 
 both before and after the {{_}} character:
 {code:javascript}
 {"total_rows":6,"offset":0,"rows":[
 {"id":"Bar","key":"Bar","value":{"rev":"1-967a00dff5e02add41819138abb3284d"}},
 {"id":"Foo","key":"Foo","value":{"rev":"1-967a00dff5e02add41819138abb3284d"}},
 {"id":"_design/temp","key":"_design/temp","value":{"rev":"1-ee20bd300ce7ffa18e9ef1144fa50fd4"}},
 {"id":"_design/temp2","key":"_design/temp2","value":{"rev":"1-9b626494fef9a884a383345540c29e97"}},
 {"id":"bar","key":"bar","value":{"rev":"1-967a00dff5e02add41819138abb3284d"}},
 {"id":"foo","key":"foo","value":{"rev":"1-967a00dff5e02add41819138abb3284d"}}
 ]}
 {code}
 What I would like is a query param like {{exclude_ddocs}}, which defaults to 
 false and would only return non-design documents, but otherwise function 
 exactly the same.  What {{offset}} and {{total_rows}} would do in this case 
 is up to you.
 Workaround: The best workaround is to do two separate queries, one with the 
 parameters {{endkey=%22_design/%22}} and the other with 
 {{startkey=%22_design0%22}}. But this is not particularly elegant.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (COUCHDB-2227) Feature request: _all_docs?exclude_ddocs=true

2014-04-21 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13975909#comment-13975909
 ] 

Robert Newson commented on COUCHDB-2227:


to answer [~wohali]'s question, the only constraint for doc ids is that they're 
not empty and are valid UTF-8. Database names are constrained to lowercase.

 Feature request: _all_docs?exclude_ddocs=true
 -

 Key: COUCHDB-2227
 URL: https://issues.apache.org/jira/browse/COUCHDB-2227
 Project: CouchDB
  Issue Type: Wish
  Security Level: public(Regular issues) 
  Components: HTTP Interface
Reporter: Nolan Lawson
Priority: Minor

 Design docs are included in {{\_all_docs}} results, which is by design (hyuk 
 hyuk).  However, this can be surprising and unwanted behavior for new users, 
 and plus, sometimes it's tricky to exclude them, e.g. if your docids come 
 both before and after the {{_}} character:
 {code:javascript}
 {"total_rows":6,"offset":0,"rows":[
 {"id":"Bar","key":"Bar","value":{"rev":"1-967a00dff5e02add41819138abb3284d"}},
 {"id":"Foo","key":"Foo","value":{"rev":"1-967a00dff5e02add41819138abb3284d"}},
 {"id":"_design/temp","key":"_design/temp","value":{"rev":"1-ee20bd300ce7ffa18e9ef1144fa50fd4"}},
 {"id":"_design/temp2","key":"_design/temp2","value":{"rev":"1-9b626494fef9a884a383345540c29e97"}},
 {"id":"bar","key":"bar","value":{"rev":"1-967a00dff5e02add41819138abb3284d"}},
 {"id":"foo","key":"foo","value":{"rev":"1-967a00dff5e02add41819138abb3284d"}}
 ]}
 {code}
 What I would like is a query param like {{exclude_ddocs}}, which defaults to 
 false and would only return non-design documents, but otherwise function 
 exactly the same.  What {{offset}} and {{total_rows}} would do in this case 
 is up to you.
 Workaround: The best workaround is to do two separate queries, one with the 
 parameters {{endkey=%22_design/%22}} and the other with 
 {{startkey=%22_design0%22}}. But this is not particularly elegant.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

