AJ,
On Tue, Mar 23, 2010 at 2:51 PM, A.J. Brown a...@ajbrown.org wrote:
I'm having some trouble querying a view. Can you help me out?
startKey=[2e7768c509e896e658ecb75f3c1cf84c,null]&endKey=[2e7768c509e896e658ecb75f3c1cf84c,{}]
Try using 'startkey' instead of 'startKey'.
If that doesn't
If you multiply (#/sec) by file size, you are actually getting _better_
throughput with the larger files.
Do you know if ab command uses HTTP 1.1 pipelining? If not, HTTP
overhead would explain the extra time.
Your English is very clear. Please let me know if mine is not.
Regards,
Randall
Sorry, to be clear:
76.74 requests/sec * 378.912 kB/request = 28971.61152 kB/sec
701.9 requests/sec * 4.737 kB/request = 3324.9003 kB/sec
Larger files = better throughput. HTTP, TCP, or maybe B+Tree overhead.
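The small-file line checks out exactly; a quick sketch of the arithmetic (the large-file product comes out slightly different from the figure quoted above, but the conclusion is unchanged, larger files win by roughly 9x):

```python
def throughput(requests_per_sec, kb_per_request):
    # total throughput is just request rate times payload size
    return requests_per_sec * kb_per_request

small = throughput(701.9, 4.737)    # the 4.7k file
large = throughput(76.74, 378.912)  # the 378.9k file
print(small, large, large / small)
```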
On Wed, Mar 24, 2010 at 00:06, Randall Leeds randall.le...@gmail.com wrote:
If you
Pipelining? Do you mean keepalive? ab holds the connection open if you
supply the -k option (Use HTTP KeepAlive feature), but it seems that
CouchDB's httpd doesn't honor it.
Yes, throughput (in b/s) is better, but on localhost, if I run the same
test against nginx, I get around 1000 #/sec on a 340k file
(344294.81 [Kbytes/sec]).
Yes, I do mean KeepAlive, sorry for the confusion.
CouchDB should support it. Can you show a dump of the headers received
by Couch somehow? Maybe there is something silly like an issue of case
with the headers.
CouchDB cannot use sendfile (or some Erlang way to do the same)
because it puts bytes
On Wed, Mar 24, 2010 at 12:26 AM, Randall Leeds randall.le...@gmail.com wrote:
Maybe you could setuid a little program that calls nice(), drops privs and
then exec's couchjs? Set this up as your js view server.
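A sketch of such a wrapper in Python (the couchjs path, nice increment, and uid are all placeholders; note that many kernels ignore the setuid bit on interpreted scripts, so a real deployment would compile the equivalent small C program):

```python
import os
import sys

COUCHJS = "/usr/local/bin/couchjs"  # placeholder install path
NICE_INCREMENT = 10                 # lower CPU priority by this much
UNPRIV_UID = 65534                  # e.g. "nobody"; adjust for your system

def build_argv(main_js):
    # couchjs takes the view server's main script as its argument
    return [COUCHJS, main_js]

def run(main_js):
    os.nice(NICE_INCREMENT)  # renice first, while we still can
    os.setuid(UNPRIV_UID)    # then drop privileges (needs root to succeed)
    os.execv(COUCHJS, build_argv(main_js))  # replace this process with couchjs

if __name__ == "__main__" and len(sys.argv) > 1:
    run(sys.argv[1])
```

Point CouchDB's js view server setting at the wrapper instead of couchjs directly.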
I thought about that, but the main CPU usage comes from the CouchDB
Erlang core
On 24 Mar 2010, at 01:29, Roessner, Silvester wrote:
Hi all,
when I store 1000 copies of a pure JSON document (size 245,310 Bytes) in
a freshly created database,
the database itself is 0.9 GB big.
That is almost 4 times bigger than the actual net payload.
Is this normal?
On 24 Mar 2010, at 09:36, Jan Lehnardt [...@apache.org] wrote:
Yeah, you'll want to run compaction to reduce on-disk file size.
See http://wiki.apache.org/couchdb/Compaction for details.
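Compaction is triggered with a POST to the database's _compact resource; a small sketch (database name and host are placeholders, and the actual network call is left commented out):

```python
from urllib.request import Request, urlopen

def compact_request(base="http://127.0.0.1:5984", db="mydb"):
    # _compact wants a POST with a JSON content type and an empty body
    req = Request(base + "/" + db + "/_compact", data=b"")
    req.add_header("Content-Type", "application/json")
    return req

req = compact_request()
# urlopen(req)  # uncomment to actually trigger compaction
print(req.get_method(), req.full_url)
```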
I stored each of the copies as a separate document with a unique ID.
My goal was to estimate how much disk
On Wed, Mar 24, 2010 at 10:02 AM, Roessner, Silvester
silvester.roess...@vision.zeiss.com wrote:
On 24 Mar 2010, at 09:36, Jan Lehnardt [...@apache.org] wrote:
Yeah, you'll want to run compaction to reduce on-disk file size.
See http://wiki.apache.org/couchdb/Compaction for details.
I stored
On Wed, Mar 24, 2010 at 09:21:31AM +0100, Benoit Chesneau wrote:
I would prefer slower indexing and less CPU usage.
nanosleep()?
about keepalive:
working:
req:
GET /uri HTTP/1.0
Connection: Keep-Alive
Host: somehost
User-Agent: someagent
Accept: */*
reply:
HTTP/1.1 200 OK
Date: Wed, 24 Mar 2010 09:44:36 GMT
Server: megaserver
Last-Modified: Sun, 22 Jul 2007 17:00:00 GMT
ETag: 4d6436-8cdd-435dd17316400
Accept-Ranges: bytes
IIRC apachebench only speaks HTTP/1.0 but uses a common violation to
support keep-alive. This likely confuses CouchDB, which speaks
HTTP/1.1.
keep-alive is also not the same as pipelining. keep-alive just reuses
connections, whereas HTTP pipelining sends multiple requests without
reading the
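The distinction, sketched at the byte level (host and paths are made up): a keep-alive client sends one request, reads the full response, then reuses the socket, while a pipelining client writes several requests before reading anything:

```python
def http_get(path, host="localhost:5984"):
    # minimal HTTP/1.1 request; 1.1 keeps the connection open by default
    return ("GET %s HTTP/1.1\r\nHost: %s\r\n\r\n" % (path, host)).encode()

# keep-alive: send http_get("/db/doc1"), read the whole response, then send the next
# pipelining: both requests are queued on the wire before any response is read
pipelined = http_get("/db/doc1") + http_get("/db/doc2")
print(pipelined.count(b"GET "))
```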
Hi,
I successfully installed CouchDB 0.11.0 from
http://people.apache.org/~nslater/dist/0.11.0/apache-couchdb-0.11.0.tar.gz
But I have a problem with invalid UTF-8 JSON. In the attached file you
can see the problem.
If you want more, I can provide debug logs.
--
Germain Maurice
On Wed, Mar 24, 2010 at 11:25:10AM +0100, Germain Maurice wrote:
I successfully installed CouchDB 0.11.0 from
http://people.apache.org/~nslater/dist/0.11.0/apache-couchdb-0.11.0.tar.gz
But i have a problem with invalid UTF-8 JSON.
What does
ls /usr/local/lib/couchdb/erlang/lib
show?
If
I checked the source. CouchDB will honor a Connection: Keep-Alive from
an HTTP 1.0 client (couch_httpd.erl's http_1_0_keep_alive/2).
I have measured the difference between serving static files from
apache2 vs. attachments from couchdb. It's always faster to do so via
apache2, and, on average,
Thank you Brian.
There were different versions of the library; I deleted them and did a
make install. Most of the tests work now.
Only two tests still fail; see them here: http://imgur.com/aCp2L.png
Brian Candler wrote:
On Wed, Mar 24, 2010 at 11:25:10AM +0100, Germain Maurice wrote:
I have measured the difference between serving static files from
apache2 vs. attachments from couchdb. It's always faster to do so via
apache2, and, on average, couchdb was 2-4 times slower at serving the
same data as apache2.
In my case, nginx vs couchdb is a 10x slowdown on static files.
It's a code change to increase the chunk size, it's not currently a
configuration setting. When I was testing this I increased it to 64k
and 128k, it didn't make much difference (it's quite possible I didn't
do it correctly, though I did verify that I had larger chunks of
attachment data in the
Hmmm, how about turning off chunks for attachments? What about the
schema-less concept?
Why tie it to the FS? Or is it problematic to call something like
"read N bytes from position X in the db" through a syscall?
Sorry, I'm not a DB developer, only a user.
2010/3/24 Robert Newson robert.new...@gmail.com
It's a code change to increase
On Wed, Mar 24, 2010 at 5:40 AM, Roessner, Silvester
silvester.roess...@vision.zeiss.com wrote:
On Wed, Mar 24 2010 at 10:06 Benoit Chesneau wrote:
Compaction not only removes old revisions but also holes in the b-tree.
Also make sure you use consecutive ids.
I tried it with consecutive ids as well
Hmm... Maybe I should write another view for the other level instead.
But is there a way to store a script in CouchDB to combine several views?
I would not want an external script since I would like to have the script being
replicated and synchronized along with the rest of the application...
Hi guys,
I want to ask: in SQL,
UPDATE users SET name = 'Crazy' WHERE name = 'Song';
Can I do this in CouchDB?
Thanks
Not directly.
The CouchDB pattern would be something like:
1. Define a view that allows you to fetch the docs you want.
2. Fetch the docs, possibly using ?include_docs=true
3. Update the docs in the client
4. Push all the docs back using _bulk_docs
If you have a lot of docs that are going to
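Steps 3 and 4 above can be sketched offline; here `rows` stands in for a view result fetched with ?include_docs=true (the field names are illustrative), and the returned payload is what you would POST to /db/_bulk_docs:

```python
def rename(rows, old, new):
    # step 3: update matching docs client-side; _rev must ride along
    updated = []
    for row in rows:
        doc = row["doc"]
        if doc.get("name") == old:
            doc["name"] = new
            updated.append(doc)
    # step 4: POST this dict as the JSON body of _bulk_docs
    return {"docs": updated}

rows = [{"doc": {"_id": "u1", "_rev": "1-a", "name": "Song"}},
        {"doc": {"_id": "u2", "_rev": "1-b", "name": "Other"}}]
payload = rename(rows, "Song", "Crazy")
print(payload)
```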
Thanks for your answer.
If you want to be able to include the same piece of code in
multiple views
Good idea, I'll do that.
What do you mean by a script to combine several views?
I mean that the implementations of algorithms that benefit from MapReduce
(TF.IDF, Markov chains, K
Hi, how do I set the bind address to five unique IPs? (bind_address =
192.168.1.1 192.168.1.2 192.168.1.3 192.168.1.4 192.168.1.5)
--
Cairo Noleto
Visit http://www.caironoleto.com/
Cairo,
You can't. If you need a subset of your public IPs used, then you'll
need to set up a proxy. CouchDB natively supports only a single
interface, with the special case that 0.0.0.0 means all interfaces.
I don't think it'd be out of the question to support multiple
listener sockets
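A sketch of the proxy approach (all addresses, ports, and paths here are illustrative): bind CouchDB to loopback and let nginx listen on each public IP:

```
; local.ini -- CouchDB listens only on loopback
[httpd]
bind_address = 127.0.0.1
port = 5984
```

```
# nginx fragment -- one listen line per public IP
server {
    listen 192.168.1.1:5984;
    listen 192.168.1.2:5984;
    # ...and so on for the remaining IPs
    location / {
        proxy_pass http://127.0.0.1:5984;
    }
}
```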
Folks who really need to chain map-reduce views have to hand-roll
their own solution. This usually involves some scripting to keep a
derived database up-to-date with the output of the source database's
view.
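A sketch of the transform half of such a script (the field names and _id scheme are my own choice): read the source view's rows and turn them into docs for the derived database, with a deterministic _id so re-running the copy updates rather than duplicates. The push itself would be a _bulk_docs POST to the derived db:

```python
import json

def rows_to_docs(rows):
    # a deterministic _id derived from the view key makes the copy idempotent
    return [{"_id": json.dumps(row["key"]), "value": row["value"]}
            for row in rows]

rows = [{"key": ["tf", "word"], "value": 3},
        {"key": ["df", "word"], "value": 2}]
docs = rows_to_docs(rows)
print(docs)
```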
On Wed, Mar 24, 2010 at 2:20 PM, Aurélien Bénel aurelien.be...@utt.fr wrote:
Thanks for
Hi,
I hope there is a nice solution for this issue which I do not see (as
CouchDB newbie, who is getting more and more Couch addicted).
I have documents which may contain text and/or references to other
documents. The client thus accesses an arbitrary document object, checks
the references made
Hello,
We have certain types of design documents within a database that
contain properties with values which must be unique. These design
documents are named in such a way that I can do an _all_docs call to
retrieve them. I then check the _all_docs result (with include_docs =
true) to make sure when
If I follow
The CouchDB pattern would be something like:
1. Define a view that allows you to fetch the docs you want.
2. Fetch the docs, possibly using ?include_docs=true
3. Update the docs in the client
4. Push all the docs back using _bulk_docs
it will be too slow in my case.
I have contents
Hi, still me with my 0.11.0 CouchDB :)
I launched continuous replication between two hosts (more than 8
million documents); it takes a long time and I'm OK with that.
My problem is that I launched another one-shot replication between the
same databases as the previous replication.
So, I
When you have a new author, write that to a document. Use that
document's id to reference it in other docs maybe?
On Wed, Mar 24, 2010 at 17:03, faust faust...@gmail.com wrote:
if i follow
The CouchDB pattern would be something like:
1. Define a view that allows you to fetch the docs
On Mar 24, 2010, at 4:18 PM, Chris Stockton wrote:
Hello,
We have certain types of design documents within a database that
contain properties with values which must be unique. These design
documents are named in such a way that I can do an _all_docs call to
retrieve them. I then check the
I mean another case.
As I said: an author changes their name, and I must replace the name in all related content.
2010/3/25 Randall Leeds randall.le...@gmail.com:
When you have a new author, write that to a document. Use that
document's id to reference it in other docs maybe?
On Wed, Mar 24, 2010 at 17:03,
What is the size limit for a CouchDB doc?
I plan to store data in this model:
{
  "author": {
    "contents": [
      { "title": "Crazy Film", "_attachment": "1GB" },
      { "title": "Crazy 2 Film", "_attachment": "1.5GB" }
    ]
  }
}
Is this the right way?
Is there a way to force a PUT? I'm rebuilding documents from another source.
I know it's safe to overwrite them. Getting the revision of each document
would take a lot more time (since I'm trying to use bulk_docs to save all
the updates).
Thanks
I meant: do not store an author's name everywhere you reference it,
but store a uuid. The author's name then appears in only one place, on
the 'author' document.
But this is very SQL-like normalization. While it'd be great if
Couch can accommodate you, perhaps your needs are really relational.
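A sketch of what that looks like (the ids and field names below are made up):

```
{ "_id": "author:7f3c", "type": "author", "name": "Song" }

{ "_id": "content:1", "type": "content",
  "title": "Crazy Film", "author": "author:7f3c" }
```

Renaming the author then means updating exactly one document; clients fetch the author doc separately, or a view emits the author id so the name can be joined back in.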
On 24 Mar 2010, at 17:20, Germain Maurice wrote:
Hi, still me with my 0.11.0 CouchDB :)
I launched continuous replication between two hosts (more than 8 million
documents); it takes a long time and I'm OK with that.
My problem is that I launched another one-shot replication between the
On Wed, Mar 24, 2010 at 22:01, Jan Lehnardt j...@apache.org wrote:
On 24 Mar 2010, at 17:20, Germain Maurice wrote:
Hi, still me with my 0.11.0 CouchDB :)
I launched continuous replication between two hosts (more than 8 million
documents); it takes a long time and I'm OK with that.
My