Re: Large attachments

2010-11-25 Thread Robert Newson
David, I've failed to reproduce this locally by following your instructions. My memory usage was stable (OS X). Another user has tried the test on Linux with R13 and reports stable memory usage also. Can you provide more details of the OS, hardware and the manner in which you are monitoring the m

Re: Large attachments

2010-11-25 Thread evxdo
Quoting Benoit Chesneau : On Mon, Nov 22, 2010 at 3:51 PM, Bram Neijt wrote: Bit of a misunderstanding here, it is about downloads, not uploads. For example: dd if=/dev/urandom of=/tmp/test.bin count=5 bs=10240 Put test.bin as an attachment in a CouchDB database Run for i in {0..50};do cu

Re: Large attachments

2010-11-23 Thread Benoit Chesneau
On Mon, Nov 22, 2010 at 3:51 PM, Bram Neijt wrote: > Bit of a misunderstanding here, it is about downloads, not uploads. > > For example: > dd if=/dev/urandom of=/tmp/test.bin count=5 bs=10240 > Put test.bin as an attachment in a CouchDB database > Run > for i in {0..50};do curl http://localho

Re: Large attachments

2010-11-23 Thread Nils Breunese
Bram Neijt wrote: > Run > for i in {0..50};do curl http://localhost:5984/[test > database]/[doc_id]/test.bin > /dev/null 2>&1 & done > > This will create 50 curl processes which download from your couchdb. 51 actually. :o) Nils. ---
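Nils's "51 actually" is correct: bash brace expansion `{0..50}` is inclusive on both ends, so the loop body runs 51 times. A quick local check:

```shell
# {0..50} expands to the 51 integers 0 through 50 inclusive,
# so the download loop spawns 51 curl processes, not 50.
count=$(echo {0..50} | wc -w)
echo "words in {0..50}: $count"
```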

Re: Large attachments

2010-11-22 Thread Jan Lehnardt
On 22 Nov 2010, at 15:51, Bram Neijt wrote: > Bit of a misunderstanding here, it is about downloads, not uploads. > > For example: > dd if=/dev/urandom of=/tmp/test.bin count=5 bs=10240 > Put test.bin as an attachment in a CouchDB database > Run > for i in {0..50};do curl http://localhost:59

Re: Large attachments

2010-11-22 Thread Bram Neijt
Bit of a misunderstanding here, it is about downloads, not uploads. For example: dd if=/dev/urandom of=/tmp/test.bin count=5 bs=10240 Put test.bin as an attachment in a CouchDB database Run for i in {0..50};do curl http://localhost:5984/[test database]/[doc_id]/test.bin > /dev/null 2>&1 & done
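Bram's steps, assembled into one script for reference. The database and document names were placeholders in the original post, so `testdb`/`doc1` below are assumptions; the upload and download steps need a running CouchDB and are wrapped in functions that are defined but not called here.

```shell
#!/usr/bin/env bash
# Reproduction sketch of the concurrent-download test.
set -e

# 1. Create a test file: 5 blocks of 10240 bytes = 51200 bytes.
dd if=/dev/urandom of=/tmp/test.bin count=5 bs=10240 2>/dev/null

# 2. Upload it as an attachment (db/doc names are assumptions;
#    pass the current document _rev as the first argument).
upload() {
  curl -X PUT "http://localhost:5984/testdb/doc1/test.bin?rev=$1" \
       -H 'Content-Type: application/octet-stream' \
       --data-binary @/tmp/test.bin
}

# 3. Start 51 concurrent downloads ({0..50} is inclusive on both ends)
#    and watch the CouchDB process's memory while they run.
hammer() {
  for i in {0..50}; do
    curl "http://localhost:5984/testdb/doc1/test.bin" > /dev/null 2>&1 &
  done
  wait
}

wc -c < /tmp/test.bin   # 51200
```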

Re: Large attachments

2010-11-22 Thread Robert Newson
Curl buffers binary uploads, depending on how you perform the operation. B. On Mon, Nov 22, 2010 at 2:03 PM, Bram Neijt wrote: > I can reproduce this problem: if I upload a 500 MB attachment and start 10 > concurrent curl commands, memory usage increases dramatically with the > following environment:
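To illustrate Robert's point about curl buffering: with `--data-binary @file`, curl reads the entire file into memory before sending it, whereas `-T file` streams the upload from disk. The CouchDB URLs below are illustrative (the `?rev=...` value is elided), so only the payload creation actually runs here.

```shell
# Create a 1 MiB sample payload.
dd if=/dev/zero of=/tmp/big.bin count=1 bs=1048576 2>/dev/null

# Buffered: curl loads the whole file into memory first.
#   curl -X PUT --data-binary @/tmp/big.bin "http://localhost:5984/db/doc/att?rev=..."
#
# Streamed: -T (upload-file) sends the file from disk without
# holding it all in memory, which matters at 500 MB.
#   curl -T /tmp/big.bin "http://localhost:5984/db/doc/att?rev=..."

wc -c < /tmp/big.bin   # 1048576
```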

Re: Large attachments

2010-11-22 Thread Bram Neijt
I can reproduce this problem: if I upload a 500 MB attachment and start 10 concurrent curl commands, memory usage increases dramatically with the following environment: Description: Ubuntu 10.10 Release: 10.10 Codename: maverick {"couchdb":"Welcome","version":"1.0.1"} Bram On Tue, Nov 16, 201

Re: Large attachments

2010-11-16 Thread evxdo
Well, I'm just doing a GET directly to the document_id + attachment: http://localhost:5984/database/doc_id/attachment Clicking on the attachment in Futon would have the same effect. David Quoting Jan Lehnardt : Hi David, On 16 Nov 2010, at 14:00, ev...@bath.ac.uk wrote: Hi everyone, I'm t

Re: Large attachments

2010-11-16 Thread Jan Lehnardt
Hi David, On 16 Nov 2010, at 14:00, ev...@bath.ac.uk wrote: > Hi everyone, > > I'm trying to work with some large attachments (around 1.5 GB). When I go to > download these (as a standalone attachment) the CouchDB process grows in size > by at least the size of the attachment before the downlo

Re: large attachments/huge databases ?`

2010-05-14 Thread Elf
My 5 cents too. Rsync is much more efficient than any ftp client. On May 14, 2010 12:10 PM, "c.Kleinhuis" wrote: hi, thank you for your description, and the project is only storing metadata :D the files will be updated via ftp synchronize :D i will post another question right now ... > > CK, >

Re: large attachments/huge databases ?`

2010-05-14 Thread c.Kleinhuis
hi, thank you for your description, and the project is only storing metadata :D the files will be updated via ftp syncronize :D i will post another question right now ... CK, My $0.02 on the storage of the videos is not to use CouchDB for that. Use Couch to store metadata on the files. Stuf

Re: large attachments/huge databases ?`

2010-05-13 Thread James Marca
On Thu, May 13, 2010 at 09:10:29PM +0100, Randall Leeds wrote: > If you want to be super couch-y and keep it HTTP based and keep couch > at the top of your server stack you could write your own http handler > that just streams files off the disk. > > http://mycouch:5984/_files/path_to_file > > Lo

Re: large attachments/huge databases ?`

2010-05-13 Thread Randall Leeds
If you want to be super couch-y and keep it HTTP based and keep couch at the top of your server stack you could write your own http handler that just streams files off the disk. http://mycouch:5984/_files/path_to_file Look at the other handlers in the couch.ini files and then look at the correspo
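A hedged sketch of what Randall describes, based on how CouchDB 1.0-era `local.ini` wires Futon's `_utils` handler (`handle_utils_dir_req` serves a directory of static files off disk). The section name and module are real in that era's config; the `_files` mapping and the path are assumptions, not a tested recipe.

```ini
; local.ini sketch (CouchDB 1.0-era; /srv/big_files is an assumption).
; Reuses the static-file handler that serves Futon to expose a directory
; at http://mycouch:5984/_files/path_to_file
[httpd_global_handlers]
_files = {couch_httpd_misc_handlers, handle_utils_dir_req, "/srv/big_files"}
```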

Re: large attachments/huge databases ?`

2010-05-13 Thread Cesar Delgado
CK, My $0.02 on the storage of the videos is not to use CouchDB for that. Use Couch to store metadata on the files. Stuff like filesystem path, server holding the file, date, time-stamp, video info, etc. The actual files are better stored in a filesystem somewhere else. It's kind of the ide
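Cesar's metadata-only approach might look like the document below. All field names and values are made up for illustration; the video itself stays on a fileserver and CouchDB only records where to find it. The curl step needs a running CouchDB and is shown commented out.

```shell
# Sketch of a metadata-only document (field names are assumptions).
cat > /tmp/video_meta.json <<'EOF'
{
  "_id": "video:intro-2010-05",
  "path": "/mnt/media/videos/intro.mp4",
  "server": "files01.example.com",
  "bytes": 734003200,
  "uploaded": "2010-05-13T12:00:00Z",
  "codec": "h264"
}
EOF

# To store it (requires a running CouchDB; not executed here):
#   curl -X PUT http://localhost:5984/videos/video:intro-2010-05 \
#        -d @/tmp/video_meta.json

grep '"path"' /tmp/video_meta.json
```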

Re: large attachments/huge databases ?`

2010-05-13 Thread c.Kleinhuis
i need ALL versions :D from the beginning to the current version. what about saving previous versions in an array field that contains everything except the versions array itself? One nice thing about attachments is that history doesn't bloat the view server memory footprint (attac

Re: large attachments/huge databases ?`

2010-05-13 Thread J Chris Anderson
On May 13, 2010, at 12:12 PM, c.Kleinhuis wrote: > J Chris Anderson schrieb: >> On May 13, 2010, at 11:35 AM, Sebastian Cohnen wrote: >> >> >>> when you need versioning, you need to implement it explicitly. >>> >> >> >> The simplest versioning scheme is for the client to store the string

Re: large attachments/huge databases ?`

2010-05-13 Thread c.Kleinhuis
J Chris Anderson schrieb: On May 13, 2010, at 11:35 AM, Sebastian Cohnen wrote: when you need versioning, you need to implement it explicitly. The simplest versioning scheme is for the client to store the string representation of a document as served by CouchDB. While updating the d

Re: large attachments/huge databases ?`

2010-05-13 Thread J Chris Anderson
On May 13, 2010, at 11:35 AM, Sebastian Cohnen wrote: > > when you need versioning, you need to implement it explicitly. The simplest versioning scheme is for the client to store the string representation of a document as served by CouchDB. While updating the document contents, the original
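Chris's scheme, simulated locally with files standing in for what a client would keep (no CouchDB involved; the doc bodies and paths are made up). Before each update, the client saves the exact string CouchDB served as a numbered snapshot, so old versions survive even after compaction discards old revisions.

```shell
# Client-side versioning sketch: snapshot the served string before updating.
rm -rf /tmp/doc_versions
mkdir -p /tmp/doc_versions
doc='{"_id":"d1","_rev":"1-abc","title":"draft"}'

# 1. Save the served representation before touching it.
n=$(ls /tmp/doc_versions | wc -l)
printf '%s' "$doc" > "/tmp/doc_versions/d1.v$((n + 1)).json"

# 2. The client may now PUT an updated body; the old string is kept
#    by the client, independent of CouchDB's revision history.
doc='{"_id":"d1","_rev":"2-def","title":"final"}'

cat /tmp/doc_versions/d1.v1.json
```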

Re: large attachments/huge databases ?`

2010-05-13 Thread Sebastian Cohnen
>>> -another point is general performance of about e.g. 200.000 documents in a >>> single >>> database ... how is disk usage when maintaining versioning of each document >>> ? >>> -can the versioning be deactivated or deleted ?! >>> >> >> Again, there is no "versioning" of documents - at lea

Re: large attachments/huge databases ?`

2010-05-13 Thread c.Kleinhuis
-he read that indexing overhead is significantly higher than e.g. mysql's - my answer was that indexing does not affect performance because it is a one-time action Right, once indices are generated, they are updated incrementally and very fast on access. ok, sounds good -another point

Re: large attachments/huge databases ?`

2010-05-13 Thread Sebastian Cohnen
Hey, On 13.05.2010, at 18:35, c.Kleinhuis wrote: > i need to convince my project manager ;) > > -he read that indexing overhead is significantly higher than e.g. mysql's - > my answer was that indexing does not affect performance because it is a > one-time action Right, once indices are generated,