So, for example, trials where
(parameters['parameter1'] == 10 AND parameters['parameter2'] == 42)
Whether this can be accomplished inside CouchDB or not, it should be
possible to do inside a Lucene index based off CouchDB data. I don't
currently do the right thing for numbers in range queries, but
couchdb-lucene uses [externals] to receive queries from the client and
it currently polls all_docs_by_seq for updates. This seems to match
Lucene's batch-oriented model anyway, so I've not looked deeply into
the update_notification option, etc.
B.
On Tue, Feb 10, 2009 at 5:16 PM, Barry Wark
Compaction also makes the file non-sparse.
The 83GB is probably not accurate; I suggest you compare what 'ls -lh'
shows versus what 'du -sh' does for the same file. I find the actual
consumed space is far, far less than 'ls' shows. CouchDB .couch files
are very sparse, large gaps of unwritten
Wouldn't clock correction/manual setting potentially render a
YYMMDDHHMMSS approach collision-prone?
It's an issue I recently faced (though in Java) and the details are
interesting.
On Linux, at least, the kernel is able to access a few different
notions of time, the most typical one is the wall
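The same distinction is visible from Python, for example, which exposes the wall clock and the monotonic clock separately (a minimal sketch):

```python
import time

# Wall-clock time can jump backwards if the system clock is corrected,
# so identifiers derived from it (e.g. YYMMDDHHMMSS strings) can collide.
wall = time.time()

# The monotonic clock never goes backwards; it is safe for measuring
# elapsed time, though useless for absolute timestamps.
a = time.monotonic()
b = time.monotonic()
assert b >= a  # guaranteed, even if the wall clock is reset meanwhile
```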
fwiw: it might make a more natural query language into couchdb-lucene
than Lucene's default query syntax. Not all of it applies, but there's
some decent overlap.
B.
On Tue, Mar 10, 2009 at 12:40 PM, Dean Landolt d...@deanlandolt.com wrote:
On Tue, Mar 10, 2009 at 7:34 AM, Christopher Lenz
That would be me...
The Rhino integration in couchdb-lucene is constrained to providing a
user-defined transformation function when indexing documents, I assume
you're doing something more elaborate?
All that code in couchdb-lucene is in Rhino.java fwiw.
B.
On Mon, Mar 30, 2009 at 11:59 PM,
I was just reading this (at
http://wiki.apache.org/couchdb/Formatting_with_Show_and_List);
Show and list functions are side effect free and idempotent. *They
can not make additional HTTP requests against CouchDB*. Their purpose
is to render JSON documents in other formats.
So they can't, and
, Samuel Wan s...@samuelwan.com wrote:
Thanks. What about validate_doc_update? My goal was to check a hashed
key in the document submitted by a user by comparing the key with a
stored document when the validate_doc_update is called.
-Sam
On Thu, Apr 23, 2009 at 9:33 AM, Robert Newson
You can also pass stale=ok if you don't need the latest results from a view.
http://wiki.apache.org/couchdb/HTTP_view_API
B.
On Mon, Jun 22, 2009 at 3:46 PM, Sergey Shepelevtemo...@gmail.com wrote:
From my limited experience with Couch, I think that your task (an ever-expanding
set of data, no
database, test your
queries there while they're fast to build, and then use them on the big database.
On Mon, Jun 22, 2009 at 7:29 PM, Robert Newson robert.new...@gmail.comwrote:
You can also pass stale=ok if you don't need the latest results from a
view.
http://wiki.apache.org/couchdb/HTTP_view_API
wouldn't emitting the start timestamp as the key solve the problem?
(if you need both, have two views).
On Fri, Jul 3, 2009 at 12:44 PM, Nils Breunesen.breun...@vpro.nl wrote:
Hello all,
We set out to use CouchDB as a piece of software that can easily be used to
create REST APIs. We publish
Your goal is achievable with couchdb-lucene
(http://github.com/rnewson/couchdb-lucene), fwiw.
That is, you would add all of the tags for each document to a
full-text view with;
{
  "_id": "_design/lucene",
  "fulltext": {
    "tags": {
      "index": "function(doc) { var ret = new Document(); ret.add(doc.tags); return ret; }"
    }
  }
}
Hi,
Can you file an issue with the errors you're getting to:
http://github.com/rnewson/couchdb-lucene/issues
If it was related to bad utf8, this problem has been fixed on the
master branch since 0.3.
B.
On Wed, Jul 8, 2009 at 8:30 AM, Nitin Borwankarni...@borwankar.com wrote:
Greetings
Hi,
You've probably realized by now that couchdb views and couchdb-lucene
indexes are completely separate. It is not currently possible to use
couchdb-lucene on the output of a couch view.
B.
On Mon, Jul 13, 2009 at 2:38 PM, Wolfgang
issovitswolfgang.issov...@gmail.com wrote:
Thanks, your tip
Well, that depends on what you're attempting with numbers.
For the record, if you index numeric values then sorting already works
correctly. That is, if you called doc.add(1) and doc.add(10), the order
would be correct when you sorted. What won't be correct is a range
query (q=[1 TO 10]), as that's
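A sketch of why string-encoded numbers break range queries (this models the general issue, not couchdb-lucene's exact internals):

```python
# Numbers indexed as raw strings sort lexicographically, not numerically.
values = [1, 5, 10, 2]
as_strings = sorted(str(v) for v in values)
assert as_strings == ['1', '10', '2', '5']   # '10' sorts before '2'

# A string range scan for [1 TO 10] then matches only '1' and '10',
# because '2' and '5' compare greater than '10':
in_range = [s for s in as_strings if '1' <= s <= '10']
assert in_range == ['1', '10']

# Zero-padding to a fixed width restores numeric order (the classic
# workaround before proper numeric field support):
padded = sorted(str(v).zfill(4) for v in values)
assert padded == ['0001', '0002', '0005', '0010']
```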
I don't think that will work. the new Date().getTime() is evaluated
once for each document, so your expectation that documents will fall
out of the view as time moves on will not be met; unchanged documents
are not updated in the view.
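A toy Python model of that incremental-indexing behavior (the function names are invented for illustration):

```python
# Toy model of incremental view indexing: the map function runs only
# when a document changes, so anything computed from "now" freezes at
# index time.
def map_fn(doc, now):
    # emulates emitting a time-dependent key, e.g. the document's age
    return now - doc["created"]

index = {}

def update_view(docs, changed_ids, now):
    for doc_id in changed_ids:      # only changed docs are re-mapped
        index[doc_id] = map_fn(docs[doc_id], now)

docs = {"a": {"created": 0}}
update_view(docs, ["a"], now=100)   # indexed when "now" was 100
update_view(docs, [], now=200)      # time passed, doc unchanged: no re-map
assert index["a"] == 100            # the stale, time-dependent key persists
```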
All I can think of is two views with a client-side join.
Adding _show and _list support is on my list for the 0.4 release. If I
get time at the weekend, I might get this done.
B.
On Fri, Aug 7, 2009 at 11:37 AM, Nitin Borwankarni...@borwankar.com wrote:
Hello all,
I am working on a BibJSON browsing couchapp, for which couchdb-lucene search
is
You can also change bind_address in the configuration to 0.0.0.0 (to
listen on all interfaces) or the IP address of your external facing
interface. Then you can reach it with http://hostname:5984/_utils
B.
On Thu, Aug 13, 2009 at 3:00 PM, Paul Davispaul.joseph.da...@gmail.com wrote:
Assuming
I can only speak to the new continuous replication. After a few days
of stress testing and bug fixes from the team, it's really pretty
amazing.
b.
On Tue, Aug 25, 2009 at 10:17 PM, Miles
Fidelmanmfidel...@meetinghouse.net wrote:
Blair Zajac wrote:
Hello,
We're looking at using CouchDB's
Multiply your numbers by the amount of precision you need and use
integers (*1000 for 3 d.p.)? Using floating point to store money
amounts seems fraught with rounding errors.
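A quick illustration of the integer minor-units approach (example amounts are made up):

```python
# Binary floating point cannot represent most decimal fractions exactly:
assert 0.1 + 0.2 != 0.3

# Storing amounts in minor units keeps arithmetic exact:
price_cents = 1999           # $19.99
tax_cents = 160              # $1.60
total_cents = price_cents + tax_cents
assert total_cents == 2159   # exactly $21.59, no rounding drift

# Format only at display time:
print(f"${total_cents // 100}.{total_cents % 100:02d}")  # prints $21.59
```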
B.
On Tue, Sep 1, 2009 at 9:58 AM, Metin Akatakat.me...@gmail.com wrote:
/* Paste this in a code editor to have code
Hi,
The index function looks correct, so I would suggest you check what
content type couchdb thinks your attachment is. If it's not in the
supported list of content types, that explains the lack of matches.
B.
On Sat, Sep 5, 2009 at 3:03 AM, Paul Joseph
Davispaul.joseph.da...@gmail.com wrote:
to force the handling by lucene for a peculiar
mime-type?
My first tries were for documents whose couchdb mime-type was
text/x-patch,
so you can guess how useful that was :p
Robert Newson wrote:
Hi,
The index function looks correct so I would suggest you check what
content type
What version of couchdb-lucene are you using?
The simplest way to clear this, if it's not happening automatically
(which it should), is to stop couchdb, delete the lucene/ directory,
and restart. The index will be rebuilt.
B.
On Mon, Sep 14, 2009 at 6:11 PM, Michael McCaffrey
The first thing I'd suspect is a Javascript syntax or runtime error in
your function (but, hey, I would say that, right?).
This stack trace is at the point where user-entered data hits the
runway, as it were, so it's not so unreasonable. Perhaps you could
paste your fulltext function?
B.
On
I haven't checked, but it's conceivable that couchdb4j (since it uses
httpclient) is issuing an "Expect: 100-continue" header, and couchdb,
until very recently, (incorrectly) treated that header case-sensitively,
causing an unnecessary wait for a timeout.
A trace of http request/response headers for a single
documents do you have? Does the deletion size vary with # of
docs?
Try setting delayed_commits=true in your config and see if that helps.
Chris
On 5 okt 2009, at 17:14, Robert Newson wrote:
I haven't checked but it's conceivable that couchdb4j (since it uses
httpclient) is issuing
, Robert Newson wrote:
Isn't couchdb (at least in the Debian package) monitored by heart?
B.
On Mon, Oct 5, 2009 at 6:05 PM, Nicholas Orr nicholas@zxgen.net
wrote:
Great!
I was wondering what to put for the test conditions.
Yours work well, so thanks to you as well ;)
Nick
On Tue
Me!
On Thu, Oct 22, 2009 at 8:13 PM, Paul Davis paul.joseph.da...@gmail.com wrote:
Who's in?
http://bit.ly/1sGHyF
Paul Davis
fwiw couchdb-lucene 0.5 will have better numeric support.
B.
On Fri, Oct 30, 2009 at 1:59 PM, Adam Kocoloski kocol...@apache.org wrote:
On Oct 30, 2009, at 1:36 PM, Duy Nguyen wrote:
Hi guys,
I have a troublesome sql query that needs to translate to couchDB
map/reduce
SELECT * FROM
What HTTP client are you using?
On Tue, Nov 3, 2009 at 11:06 AM, Sebastian Negomireanu
sebastian.negomire...@justdesign.ro wrote:
Ok I will try that and come back with results.
Best regards,
Sebastian Negomireanu | CTO / Managing Partner JustDesign Sibiu, Romania
+40-726-181186 |
Message-
From: Robert Newson [mailto:robert.new...@gmail.com]
Sent: Tuesday, November 03, 2009 2:22 PM
To: user@couchdb.apache.org
Subject: Re: Performance issue
What HTTP client are you using?
On Tue, Nov 3, 2009 at 11:06 AM, Sebastian Negomireanu
sebastian.negomire...@justdesign.ro wrote
see how much smaller the database gets once you compact it. :)
On Tue, Nov 10, 2009 at 5:04 PM, Ben Cohen nco...@ucsd.edu wrote:
I've been lurking on the list for awhile -- I like the design of couchdb and
am taking a look to see if I can use it in any upcoming projects.
I made a little
Did you emit your JSON response on a single line?
On Thu, Nov 19, 2009 at 10:42 PM, Jim Woodgate jdwo...@gmail.com wrote:
I finally have a basic external program running, but I find that if I
return 57 ids it works, but if I return 58 ids or more I get an error.
Is there a way to tell what
wrote:
On Thu, Nov 19, 2009 at 5:10 PM, Robert Newson robert.new...@gmail.com
wrote:
Did you emit your JSON response on a single line?
Yes it's all one line.
On Thu, Nov 19, 2009 at 10:42 PM, Jim Woodgate jdwo...@gmail.com wrote:
I finally have a basic external program running, but I find
Verify that you've hooked up the indexer (under update_notification)
and verify that it has built indexes (you should find a directory
called 'lucene').
Also check both the couchdb.log and couchdb-lucene.log for errors.
B.
On Sat, Nov 28, 2009 at 1:38 PM, Smrchy smr...@gmail.com wrote:
Hi,
I get a ".../lucene/by_name is not a valid view" error
when I call it.
Hope this helps
On Sat, Nov 28, 2009 at 2:48 PM, Robert Newson robert.new...@gmail.comwrote:
Verify that you've hooked up the indexer (under update_notification)
and verify that it has built indexes (you should find a directory
I'm not sure the API is quite as you say. When using feed=continuous
you do see all the changes, since it's just being passed to an event
listener of some kind. Where you do, definitely, see 'gaps' is when
you use _changes retrospectively.
1) create a new database.
2) create 4 documents (say).
3)
time by
compaction, so it's not like they could ever be guaranteed.
B.
On Sun, Nov 29, 2009 at 11:02 PM, Chris Anderson jch...@apache.org wrote:
On Sun, Nov 29, 2009 at 2:33 PM, Robert Newson robert.new...@gmail.com
wrote:
I'm not sure the API is quite as you say. When using feed=continuous
I still need to see your design document. :)
Either respond to the issue you created at github
(http://github.com/rnewson/couchdb-lucene/issues#issue/31) or follow
up here, not both.
B.
On Fri, Dec 11, 2009 at 3:43 AM, Mark Gallop mark.gal...@gmail.com wrote:
Hi all,
Hope it is ok to post
The README describes how to configure that. In 0.4, it's a system
property and in 0.5 it is (or, rather, will be when it's released) a
setting in a properties file.
B.
On Wed, Dec 16, 2009 at 11:26 AM, [mRg] emar...@googlemail.com wrote:
Hi all,
I was wondering if anyone knew of a way of
Is your source machine locked down with admin passwords? If so, the
other machine can't read your design documents, you'll need to
authenticate the replication task.
B.
On Fri, Dec 18, 2009 at 3:09 PM, Robert Campbell rrc...@gmail.com wrote:
I have a small database (only 20 docs) and I'm trying
the destination machine
is locked down. I log in to the destination machine, then select the
remote (source, open) database to copy from, then the local, empty DB
to copy into.
On Fri, Dec 18, 2009 at 6:31 PM, Robert Newson robert.new...@gmail.com
wrote:
Is your source machine locked down
for 0.4 - indexing should start when you start couchdb, assuming you
added the update_notification settings.
for 0.5 (not released) - indexing starts on the first query, just like
view indexing does in couchdb. So just trigger the build by running a
query (any query) just like the same rule for
definitely taking longer
than 60 seconds.
-Patrick
On 23/12/2009 12:02 AM, Robert Newson robert.new...@gmail.com wrote:
for 0.4 - indexing should start when you start couchdb, assuming you
added the update_notification settings.
for 0.5 (not released) - indexing starts on the first query, just
Try database compaction?
B.
On Mon, Feb 1, 2010 at 4:27 PM, Santi Saez santis...@woop.es wrote:
Hi,
I'm doing some initial tests with CouchDB, trying to store 2^32 IP addresses
(approximately 4.3 billion documents).
Documents have only required fields: _id and _rev, but I've noticed
Compaction should reduce disk usage even without updates or deletes,
but that is probably not true for 0.8. Odd that you get the exact same
byte count after compaction...
On Mon, Feb 1, 2010 at 4:52 PM, Santi Saez santis...@woop.es wrote:
El 01/02/10 17:31, Robert Newson escribió:
Try database
1) it's reduce(keys, values, rereduce). The method is called
with one or more values for the same key, which you can then reduce to a
summary value. It's called 'reduce' because the result must be smaller
than the input. Building a result as large as the input (in fact, as
large as the sum of
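The reduce contract described above can be modeled in a few lines of Python (a sketch; CouchDB actually runs reduce functions in JavaScript):

```python
# Toy model of the reduce contract. On the first pass, `values` are raw
# emitted values; on a rereduce pass, `values` are previous reduce
# outputs and `keys` is None.
def reduce_fn(keys, values, rereduce):
    # a sum behaves identically in both phases, and the output is
    # always no larger than the input
    return sum(values)

# first-level reduce over emitted rows
partial1 = reduce_fn([["a", 1], ["a", 2]], [10, 20], rereduce=False)
partial2 = reduce_fn([["a", 3]], [5], rereduce=False)

# rereduce combines the partial results
total = reduce_fn(None, [partial1, partial2], rereduce=True)
assert total == 35
```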
I think create_target:true was added after 0.10 and is part of the next release.
B.
On Tue, Feb 9, 2010 at 4:51 PM, Dan Smythe xkeita...@gmail.com wrote:
All --
In my testing, it appears that when create_target:true is sent, CouchDB
still does not create the target database.
For Example:
startkey not startKey, endkey not endKey.
B.
On Thu, Feb 11, 2010 at 6:31 PM, Sean Clark Hess seanh...@gmail.com wrote:
Hi, I need to run through every record in a database, and I want to do it in
chunks so ruby doesn't collapse and die when it runs out of memory.
As far as I know, to page
I'm afraid it's not possible (anyone that thinks they can solve this,
please speak up!)
couchdb-lucene allows Javascript functions specifically so you can
duplicate code from your normal couch views and achieve the
illusion of joins without the (imho) intractable performance issues
that
Hi,
#1 remove the update_notification section from the ini file.
#2 nothing will happen until you query your fulltext view, just the
same as couchdb views.
I'm working on packaging 0.5 for a release so I'll be ensuring that
log output goes to a sensible location for most installations (from
Glad it's working for you, you're quite welcome.
B.
On Fri, Feb 26, 2010 at 10:19 AM, Bruno Ronchetti
bruno.ronche...@mac.com wrote:
Robert,
I am an idiot - I had not restarted couchdb-lucene
Now that it is running, everything works as expected.
Thanks for your patience.
Regards.
You can query with stale=ok and the view won't change (as long as no
other call happens without stale=ok). You'll have to call without
stale=ok sometimes, though, so you'll still need to take care. Does
that help?
B.
On Fri, Feb 26, 2010 at 11:27 AM, Jens Alfke j...@mooseyard.com wrote:
If an
I'm hoping to ship an .rpm and a .deb for the 0.5 release. I'm away on
business so I haven't made any progress on that yet.
B.
On Tue, Mar 2, 2010 at 7:20 AM, Markus Jelsma mar...@buyways.nl wrote:
Maven won't work properly using gcj, use Sun Java instead.
you may need to add -Xmx1g to your java line to make this work;
B.
On Wed, Mar 3, 2010 at 9:36 PM, km srikrishnamo...@gmail.com wrote:
Please check that:
1) your directory for storing indexes (the indexes dir) is properly set in the local.ini
config file
2) ownership of that indexes dir is set to the couchdb user
Can you include some of the log output?
A coordinated failure like this points to external factors but log
output will help in any case.
B.
On Fri, Mar 5, 2010 at 7:18 AM, Peter Bengtson pe...@peterbengtson.com wrote:
We have a cluster of servers. At the moment there are three servers, each
fwiw: I use a cron job to establish continuous replication precisely
because they are not persistent. POST'ing to _replicate with the same
source and target is idempotent, so a cron job that mindlessly
resubmits all your replication tasks is harmless.
I go further, since I use pairs of servers,
Can you include your fulltext function? You can programmatically add
any content to the index, so you might work around this by converting
your array to a string yourself. If you show the function, I can
verify if it's a bug in the function or in couchdb-lucene's conversion
rules.
B.
On Tue, Mar
On Wed, Mar 10, 2010 at 1:53 AM, Robert Newson robert.new...@gmail.comwrote:
Can you include your fulltext function? You can programmatically add
any content to the index, so you might work around this by converting
your
I'm pretty sure that two machines writing to the same .couch file
would be disastrous (i.e., near-complete data loss). Using separate
databases on the same SAN with replication between them would work,
but doesn't do what the OP is asking.
B.
On Mon, Mar 15, 2010 at 10:25 AM, Brian Candler
couchdb-lucene 0.5 can also do bounding box searches;
?q=pizza AND latdouble:[51.4707 TO 51.5224] AND
longdouble:[-0.6622 TO -0.5775]
0.5 isn't released (yet) but it's fairly stable at this point.
B.
On Mon, Mar 15, 2010 at 4:50 PM, Christopher O'Connell
jwritec...@gmail.com wrote:
You cannot
Content-encoding negotiation is pretty standard. Your request didn't
say it could accept compressed responses, so couch is inflating the data for
you. The fix really is to add the Accept-Encoding header. CouchDB
actually stores the data compressed if you send it that way, too.
B.
On Wed, Mar 17, 2010
iirc apachebench only speaks http/1.0 but uses a common violation to
support keep-alive. This likely confuses CouchDB which speaks
http/1.1.
keep-alive is also not the same as pipelining. keep-alive just reuses
connections, whereas HTTP pipelining sends multiple requests without
reading the
this request without any problems.
why couchdb not?
2010/3/24 Robert Newson robert.new...@gmail.com
iirc apachebench only speaks http/1.0 but uses a common violation to
support keep-alive. This likely confuses CouchDB which speaks
http/1.1.
keep-alive is also not the same as pipelining. keep
It's a code change to increase the chunk size, it's not currently a
configuration setting. When I was testing this I increased it to 64k
and 128k, it didn't make much difference (it's quite possible I didn't
do it correctly, though I did verify that I had larger chunks of
attachment data in the
I am wondering why not introduce locking in couchdb
It's because locking doesn't scale. The locking strategy you outlined
works fine when your database runs on one machine, but fails when it
runs on two or more machines. A distributed lock, while possible,
would require all machines to lock,
with sharding coming :(.
On 28.03.2010, at 18:40, Robert Newson wrote:
I am wondering why not introduce locking in couchdb
It's because locking doesn't scale. The locking strategy you outlined
works fine when your database runs on one machine, but fails when it
runs on two or more machines
Deploying a new design document, with the updated view functions,
building the views, and then using HTTP COPY to copy the new design
document over the old one, allows for this already, regardless of
whether you use URL rewriting.
curl http://localhost:5984/db5/_design/foo -X PUT -d '{}'
curl
the db is called
data/mydb-prod?
B.
On Wed, Mar 31, 2010 at 6:46 PM, Craig Blake
cra...@compasspointtech.net wrote:
Shouldn't a replication of the database ignore them? Or is there a step I
need to do to filter those docs out explicitly?
Thanks,
Craig
On Mar 31, 2010, at 11:42 AM, Robert
On reflection, I (partially) retract that. It works for the default
group_level setting, so it implicitly does what you need. A reduce that
ignores all the input parameters is going to behave oddly for different
group_level settings.
On Mon, Apr 5, 2010 at 9:03 PM, Robert Newson robert.new
I don't think your reduce is making the results unique. Rather, it's
non-deterministically discarding rows. When couchdb calls the reduce
method, all of the input rows it has selected (outside of your control)
are reduced to 'true'. I think it just appears to be working but
isn't.
Further, I don't
I second the call to see a mockup. I don't dislike the page as much as
the OP but clearly it could be refreshed.
B.
On Tue, Apr 13, 2010 at 12:24 PM, Paul Davis
paul.joseph.da...@gmail.com wrote:
I always quite liked it.
Anyway, this is open sauce as they say. The quickest way to changing
Very nice design. s/geared for the web/designed for the web/ ?
On Tue, Apr 13, 2010 at 9:41 PM, Paweł Stawicki pawelstawi...@gmail.com wrote:
I like the new design. Font is not so important for me, can be old one, can
be new one.
--
Paweł Stawicki
http://pawelstawicki.blogspot.com
Hi,
0.5 does indeed use _changes to incrementally update the Lucene
indexes; it should *not* be starting over unless you delete the index
or change the index functions. 0.5 is under active development so I'm
very keen to hear about this bug. I'm attempting to reproduce it
locally now.
B.
On
Make sure you're up to date. The ini file no longer has a log entry;
the log output location is in the log4j.xml file. If you unzip a newly
built zip file ('mvn' will build one for you) it should log to a file
in the logs/ folder.
I've verified that c-l does not start over when restarted with the
When you redo the schema diagram, it's probably time to drop the
lucene box, since it's not part of couch.
B.
On Wed, Apr 14, 2010 at 12:25 PM, James Fisher jameshfis...@gmail.com wrote:
OK, I'm still at the messing-around-in-Inkscape stage, but this is how
things stand (ignore the schema
Ok, I think I understand this now.
When you start couchdb-lucene on a database for the first time (and
after a restart), it looks at the update_seq of all the Lucene indexes
it has on disk and takes the lowest number of these. It then uses that
in a call to _changes?since=N.
My suspicion is you
number will
tell me a lot.
B.
On Wed, Apr 14, 2010 at 1:29 PM, Manokaran K manoka...@gmail.com wrote:
On Wed, Apr 14, 2010 at 5:49 PM, Robert Newson robert.new...@gmail.comwrote:
Ok, I think I understand this now.
When you start couchdb-lucene on a database for the first time (and
after
Hi,
You need to modify log4j.xml and change the word INFO to DEBUG and
then restart couchdb. Please send all the output that it gives.
B.
On Wed, Apr 14, 2010 at 1:58 PM, Manokaran K manoka...@gmail.com wrote:
I tried with the latest src. This time it starts from update_seq 7621.
There's a
, 2010 at 8:08 PM, Robert Newson
robert.new...@gmail.comwrote:
Hi,
You need to modify log4j.xml and change the word INFO to DEBUG and
then restart couchdb. Please send all the output that it gives.
Its here: http://pastie.org/921404
regds,
mano
I did one more restart and the following
I can't reproduce this. My setup always picks up where I left off, so
there must be some step I'm not doing to trigger this.
Can you delete the target/indexes and reproduce this from scratch? If
so, could you list all the steps?
B.
On Thu, Apr 15, 2010 at 5:55 PM, Robert Newson robert.new
there's also authbind.
http://en.wikipedia.org/wiki/Authbind
On Thu, Apr 15, 2010 at 5:15 PM, Mikhail A. Pokidko
mikhail.poki...@gmail.com wrote:
On Thu, Apr 15, 2010 at 7:54 PM, Noah Slater nsla...@me.com wrote:
You don't.
Technically you can - you can start with root privileges to bind to
Something like:

map:
function(doc) {
  emit([doc.name, doc.timestamp], null);
}

no reduce method, with calls like:

http://localhost:5984/db/_design/ddoc/_view/view?startkey=["name",{}]&endkey=["name"]&descending=true&limit=1

should get you the latest (highest timestamp) row for the document whose
doc.name is "name".
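That descending/limit=1 trick can be modeled with a sorted key space in Python (a sketch of the semantics, not the actual view engine):

```python
# Toy model of the [name, timestamp] key space; view rows are kept
# sorted by key, and {} sorts after any timestamp.
rows = sorted([
    (("alice", 10), None),
    (("alice", 30), None),
    (("alice", 20), None),
    (("bob", 99), None),
])

def latest(name):
    # Emulates descending=true with startkey=[name, {}], endkey=[name],
    # limit=1: scan backwards, return the first key for this name.
    for key, _ in reversed(rows):
        if key[0] == name:
            return key[1]
    return None

assert latest("alice") == 30
assert latest("bob") == 99
```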
B.
On Fri, Apr 16, 2010 at 9:05 AM, Manokaran K manoka...@gmail.com wrote:
On Thu, Apr 15, 2010 at 10:30 PM, Robert Newson
robert.new...@gmail.comwrote:
I can't reproduce this. My setup always picks up where I left off, so
there must be some step I'm not doing to trigger this.
Can you
for this today; I'll just add an empty document if
there was no other change.
B.
On Fri, Apr 16, 2010 at 10:20 AM, Manokaran K manoka...@gmail.com wrote:
On Fri, Apr 16, 2010 at 1:51 PM, Robert Newson robert.new...@gmail.comwrote:
That's more interesting. IIRC, Lucene's commit() method will only
write
This is now fixed on the master branch.
I force a document addition if there wasn't one since the last commit.
You'll see it in doc_count for index functions that don't index
anything.
B.
On Fri, Apr 16, 2010 at 12:10 PM, Robert Newson robert.new...@gmail.com wrote:
Yes, that would be better
Hi,
I cut the 0.5.0 release of couchdb-lucene today. Lots of changes and
improvements; take care when upgrading from 0.4! The most notable
difference between 0.4 and 0.5 is that 0.5 runs as a standalone daemon
(whereas 0.4 was launched by couchdb's externals feature).
Bug fixes to 0.5.0 will
Oops. :) Will fix.
On Sat, Apr 17, 2010 at 4:54 PM, Sebastian Cohnen
sebastiancoh...@googlemail.com wrote:
very nice work, robert! :)
but you forgot to update the README (e.g. you can now remove the big fat
warning, that 0.5 is not yet released) :)
On 17.04.2010, at 16:22, Robert Newson
You could also do this:
1) GET /db/my_counter_doc
1a) if 404, PUT /db/my_counter_doc -d '{"counter":0}' and go back to 1.
1b) if 200, PUT /db/my_counter_doc -d '{"counter":<counter+N>,
"_rev":"<rev from 1>"}'
1c) repeat from step 1 if 1b returned 409.
2) use the numbers counter to counter+N-1 for doc ids.
This is a common
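That retry loop can be sketched in Python against an in-memory stand-in for CouchDB's revision checking (FakeDB and allocate_ids are hypothetical names, not a real client API):

```python
import itertools

# In-memory stand-in for CouchDB's revision check: a PUT succeeds only
# if the supplied rev matches the stored one, otherwise it's a 409.
class FakeDB:
    def __init__(self):
        self.docs = {}
        self._revs = itertools.count(1)

    def get(self, doc_id):
        return self.docs.get(doc_id)            # None emulates a 404

    def put(self, doc_id, doc, rev=None):
        current = self.docs.get(doc_id)
        if current is not None and current["_rev"] != rev:
            return False                        # 409 Conflict
        self.docs[doc_id] = dict(doc, _rev=next(self._revs))
        return True

def allocate_ids(db, n):
    # GET the counter, PUT counter+n with the fetched rev, retry on 409
    while True:
        doc = db.get("my_counter_doc")
        if doc is None:
            db.put("my_counter_doc", {"counter": 0})
            continue
        start = doc["counter"]
        if db.put("my_counter_doc", {"counter": start + n}, rev=doc["_rev"]):
            return range(start, start + n)

db = FakeDB()
assert list(allocate_ids(db, 5)) == [0, 1, 2, 3, 4]
assert list(allocate_ids(db, 3)) == [5, 6, 7]
```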
Hi,
Could you describe the changes you've made to your design document in
more detail? A step-by-step procedure to reproduce the problem would
also help me immensely.
Thanks,
B.
On Mon, May 3, 2010 at 12:25 PM, Bruno Ronchetti
bruno.ronche...@mac.com wrote:
Hi everyone,
I intend to work with
The HEAD behavior of trunk is much improved over 0.5.0. You should now
receive a sensible Content-Length header as long as you didn't upload
already compressed attachments and then download them without
compression (as couchdb, in this case, does not know the uncompressed
length).
B.
On Sun, May
bah, of course I mean 0.11.0 (forgive me!)
B.
On Sun, May 9, 2010 at 8:48 PM, Robert Newson robert.new...@gmail.com wrote:
The HEAD behavior of trunk is much improved over 0.5.0. You should now
receive a sensible Content-Length header as long as you didn't upload
already compressed
If it helps, you can only group from the left side of the array: for
['a', 'b', 'c'], group_level=1 groups by ['a'], group_level=2 by ['a', 'b'],
and group_level=3 by ['a', 'b', 'c'].
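The prefix-grouping behavior can be sketched in Python (grouped is an invented helper modeling the semantics, with sum as the reduce):

```python
from itertools import groupby

# view rows: (array key, value)
rows = [
    (("a", "b", "c"), 1),
    (("a", "b", "d"), 2),
    (("a", "x", "c"), 3),
]

def grouped(rows, level):
    # group_level=N groups rows by the first N elements of the array key,
    # then reduces each group (a sum, here)
    keyed = sorted(rows, key=lambda r: r[0][:level])
    return {k: sum(v for _, v in g)
            for k, g in groupby(keyed, key=lambda r: r[0][:level])}

assert grouped(rows, 1) == {("a",): 6}
assert grouped(rows, 2) == {("a", "b"): 3, ("a", "x"): 3}
assert grouped(rows, 3) == {("a", "b", "c"): 1, ("a", "b", "d"): 2,
                            ("a", "x", "c"): 3}
```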
B.
On Thu, May 20, 2010 at 1:25 PM, Kropp, Henning hkr...@microlution.de wrote:
Am 20.05.2010 12:25, schrieb Simon
The potential inability to complete compaction in write-saturated
environments is captured in
http://issues.apache.org/jira/browse/COUCHDB-487 with a patch.
I think kocolosk has recently written a patch that improves the way
that data needs to be read to perform compaction, which in turn
reduces
I succeeded in preventing compaction from completing back in the 0.9 days,
but I've been unable to reproduce it since 0.10 onwards. Compaction
retries until it succeeds (or you hit the end of the disk). I've not
managed to make it retry more than five times before it succeeds.
B.
On Wed, May 26, 2010 at
-
From: Robert Newson robert.new...@gmail.com
Sent: Wed 26-05-2010 22:56
To: user@couchdb.apache.org;
Subject: Re: Re: Newbie question: compaction and mvcc consistency?
I succeeded in preventing compaction from completing back in the 0.9 days
but I've been unable to reproduce it since 0.10 onwards
that can't cope with the amount of writes?
-Original message-
From: Robert Newson robert.new...@gmail.com
Sent: Wed 26-05-2010 22:56
To: user@couchdb.apache.org;
Subject: Re: Re: Newbie question: compaction and mvcc consistency?
I succeeded in preventing compaction from completing back
The reason couchdb-lucene requires you to write a javascript function
is that there is no single mapping from a couchdb document to a Lucene
Document that suits everyone.
B.
On Fri, Jun 4, 2010 at 10:31 PM, Norman Barker norman.bar...@gmail.com wrote:
Hi,
I am writing a clucene indexer for
and it will be good to do a comparison.
thanks,
Norman
On Fri, Jun 4, 2010 at 3:34 PM, Robert Newson robert.new...@gmail.com wrote:
The reason couchdb-lucene requires you to write a javascript function
is that there is no single mapping from a couchdb document to a Lucene
Document that suits everyone.
B
Nils,
I cut 0.5.2 today which includes the concurrency fix for direct querying.
B.
On Sat, Jun 5, 2010 at 10:09 PM, Nils Breunese n.breun...@vpro.nl wrote:
We had some serious performance problems with couchdb-lucene on a busy site
recently. It turned out the problem wasn't couchdb-lucene