David,
I've failed to reproduce this locally by following your instructions.
My memory usage was stable (OS X). Another user has tried the test on
Linux with R13 and reports stable memory usage also.
Can you provide more details of the OS, hardware and the manner in
which you are monitoring the memory usage?
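For reference, on Linux one way to sample the Erlang VM's resident set size
while the downloads run (assuming the CouchDB process shows up as beam.smp)
is something like:
# print the RSS in KB once a second for a minute
for i in {1..60}; do ps -o rss= -C beam.smp; sleep 1; done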
Quoting Benoit Chesneau:
On Mon, Nov 22, 2010 at 3:51 PM, Bram Neijt wrote:
Bit of a misunderstanding here: it is about downloads, not uploads.
For example:
dd if=/dev/urandom of=/tmp/test.bin count=5 bs=10240
Put test.bin as an attachment in a CouchDB database
Run
for i in {0..50};do curl http://localhost:5984/[test database]/[doc_id]/test.bin > /dev/null 2>&1 & done
Bram Neijt wrote:
> Run
> for i in {0..50};do curl http://localhost:5984/[test
> database]/[doc_id]/test.bin > /dev/null 2>&1 & done
>
> This will create 50 curl processes which download from your couchdb.
51 actually. :o)
Nils.
---
Bit of a misunderstanding here: it is about downloads, not uploads.
For example:
dd if=/dev/urandom of=/tmp/test.bin count=5 bs=10240
Put test.bin as an attachment in a CouchDB database
Run
for i in {0..50};do curl http://localhost:5984/[test
database]/[doc_id]/test.bin > /dev/null 2>&1 & done
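For completeness, one way to do the "put test.bin as an attachment" step with
curl (assuming a database named test and a document named doc_id; pass the
_rev returned when the document is created):
curl -X PUT http://localhost:5984/test
curl -X PUT http://localhost:5984/test/doc_id -H 'Content-Type: application/json' -d '{}'
# use the "rev" value from the previous response below
curl -X PUT 'http://localhost:5984/test/doc_id/test.bin?rev=<rev>' -H 'Content-Type: application/octet-stream' --data-binary @/tmp/test.bin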
Curl buffers binary uploads, depending on how you perform the operation.
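For what it's worth, as far as I know --data-binary @file reads the whole
file into memory before sending, while -T/--upload-file streams it from disk
(same placeholder names as above):
# buffers the whole file in memory first
curl -X PUT 'http://localhost:5984/test/doc_id/test.bin?rev=<rev>' -H 'Content-Type: application/octet-stream' --data-binary @/tmp/test.bin
# streams the file instead
curl -T /tmp/test.bin 'http://localhost:5984/test/doc_id/test.bin?rev=<rev>' -H 'Content-Type: application/octet-stream'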
B.
On Mon, Nov 22, 2010 at 2:03 PM, Bram Neijt wrote:
> I can reproduce this problem: if I upload a 500 MB attachment and start 10
> concurrent curl commands, memory usage increases dramatically in the
> following environment:
I can reproduce this problem: if I upload a 500 MB attachment and start 10
concurrent curl commands, memory usage increases dramatically in the
following environment:
Description: Ubuntu 10.10
Release: 10.10
Codename: maverick
{"couchdb":"Welcome","version":"1.0.1"}
Bram
On Tue, Nov 16, 2010 ...
Well, I'm just doing a GET directly to the document_id + attachment:
http://localhost:5984/database/doc_id/attachment
Clicking on the attachment in Futon would have the same effect.
David
Quoting Jan Lehnardt:
Hi David,
On 16 Nov 2010, at 14:00, ev...@bath.ac.uk wrote:
Hi everyone,
I'm trying to work with some large attachments (around 1.5 GB).
Hi David,
On 16 Nov 2010, at 14:00, ev...@bath.ac.uk wrote:
> Hi everyone,
>
> I'm trying to work with some large attachments (around 1.5 GB). When I go to
> download these (as a standalone attachment) the CouchDB process grows in size
> by at least the size of the attachment before the download ...
My 5 cents too.
Rsync is much more efficient than any ftp client.
On May 14, 2010 12:10 PM, "c.Kleinhuis" wrote:
hi, thank you for your description, and the project is only storing metadata
:D
the files will be updated via ftp synchronization :D
i will post another question right now ...
> > CK,
hi, thank you for your description, and the project is only storing
metadata :D
the files will be updated via ftp synchronization :D
i will post another question right now ...
CK,
My $0.02 on the storage of the videos is not to use CouchDB for that. Use
Couch to store metadata on the files. Stuff like filesystem path, server
holding the file, date, time-stamp, video info, etc.
On Thu, May 13, 2010 at 09:10:29PM +0100, Randall Leeds wrote:
> If you want to be super couch-y and keep it HTTP based and keep couch
> at the top of your server stack you could write your own http handler
> that just streams files off the disk.
>
> http://mycouch:5984/_files/path_to_file
>
> Look at the other handlers in the couch.ini files and then look at the
> corresponding ...
If you want to be super couch-y and keep it HTTP based and keep couch
at the top of your server stack you could write your own http handler
that just streams files off the disk.
http://mycouch:5984/_files/path_to_file
Look at the other handlers in the couch.ini files and then look at the
corresponding ...
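As a rough sketch of the wiring (my_file_handler is a hypothetical Erlang
module you would have to write and put on the code path):
[httpd_global_handlers]
; stream GET /_files/<path> straight off the disk
_files = {my_file_handler, handle_req, "/srv/files"}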
CK,
My $0.02 on the storage of the videos is not to use CouchDB for that. Use
Couch to store metadata on the files. Stuff like filesystem path, server
holding the file, date, time-stamp, video info, etc. The actual files are
better stored in a filesystem somewhere else. It's kind of the idea ...
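For example, a metadata document could look something like this (all field
names made up):
curl -X PUT http://localhost:5984/videos/video-0001 -H 'Content-Type: application/json' -d '{
  "path": "/srv/videos/video-0001.mp4",
  "server": "fileserver01",
  "uploaded_at": "2010-05-13T18:35:00Z",
  "video": {"codec": "h264", "duration_s": 90, "width": 1280, "height": 720}
}'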
i need ALL versions :D from the beginning to the current version. what about
saving previous versions as an array field containing everything but the
array field for saving the versions?
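A sketch of that array-field idea (field names made up): each update copies
the current fields, minus the history array itself, onto the end of
previous_versions:
{
  "_id": "doc-123",
  "title": "current title",
  "body": "current body",
  "previous_versions": [
    {"title": "first title", "body": "first body"},
    {"title": "second title", "body": "second body"}
  ]
}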
One nice thing about attachments is that history doesn't bloat the view server
memory footprint (attachments ...
On May 13, 2010, at 12:12 PM, c.Kleinhuis wrote:
> J Chris Anderson schrieb:
>> On May 13, 2010, at 11:35 AM, Sebastian Cohnen wrote:
>>> when you need versioning, you need to implement it explicitly.
>> The simplest versioning scheme is for the client to store the string
>> representation of a document as served by CouchDB.
J Chris Anderson schrieb:
On May 13, 2010, at 11:35 AM, Sebastian Cohnen wrote:
when you need versioning, you need to implement it explicitly.
The simplest versioning scheme is for the client to store the string
representation of a document as served by CouchDB. While updating the document
contents, the original ...
On May 13, 2010, at 11:35 AM, Sebastian Cohnen wrote:
>
> when you need versioning, you need to implement it explicitly.
The simplest versioning scheme is for the client to store the string
representation of a document as served by CouchDB. While updating the document
contents, the original
>>> -another point is general performance of about e.g. 200,000 documents in a
>>> single database ... how is disk usage when maintaining versioning of each
>>> document?
>>> -can the versioning be deactivated or deleted?!
>>
>> Again, there is no "versioning" of documents - at least ...
-he read that indexing overhead is significantly higher than e.g. MySQL's -
my answer was that indexing does not affect performance because it is a
one-time action
Right, once indices are generated, they are updated incrementally and very fast
on access.
ok, sounds good
-another point
Hey,
On 13.05.2010, at 18:35, c.Kleinhuis wrote:
> i need to convince my project manager ;)
>
> -he read that indexing overhead is significantly higher than e.g. MySQL's -
> my answer was that indexing does not affect performance because it is a
> one-time action
Right, once indices are generated, they are updated incrementally and very fast
on access.