Hi,
Not great options out of the box, unfortunately.
1) The autoupdate property (true|false) in the design document itself disables
"background" indexing.
2) The ken.ignore config items let you block index building for specific
databases by name.
3) Disable ken entirely.
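For option 1, a minimal sketch of a design document with autoupdate disabled
(the ddoc name and index are hypothetical; only the autoupdate flag is the
point here):

```javascript
// Hypothetical design document: "autoupdate": false tells the background
// indexer (ken) not to build this ddoc's indexes automatically; they are
// only built when queried.
const ddoc = {
  _id: '_design/search',   // hypothetical ddoc name
  autoupdate: false,
  indexes: {
    by_name: {
      index: "function (doc) { if (doc.name) { index('name', doc.name); } }"
    }
  }
};
```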
You'd also have to ensure users
to save doing further queries to
>> convert a list of IDs to users.
>>
>>> On 12 Nov 2023, at 17:24, Robert Newson wrote:
>>>
>>> chatgpt makes everything up. :)
>>>
>>> You can't fetch another document during the indexing callbacks.
>
>> On 11 Nov 2023, at 22:52, Robert Newson wrote:
>>
>> Hi,
>>
>> The problem is that the getDoc() function doesn't exist, so evaluating it
>> throws an error, which causes the document not to be indexed at all.
>>
>> B.
Hi,
The problem is that the getDoc() function doesn't exist, so evaluating it
throws an error, which causes the document not to be indexed at all.
B.
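For context, a search index function only sees the document being indexed;
there is no getDoc() to fetch other documents. A sketch, with the CouchDB-provided
`index` callback stubbed out so it runs standalone (doc shape and field names
are hypothetical):

```javascript
// Stub of the index(name, value, opts) callback CouchDB provides at
// indexing time; here it just records calls so we can see what happens.
const calls = [];
function index(name, value, opts) { calls.push([name, value]); }

// Hypothetical search index function: it may only use fields of `doc`
// itself, never another document.
function indexFun(doc) {
  if (doc.type === 'user') {
    index('name', doc.name, { store: true });
  }
}

indexFun({ type: 'user', name: 'alice' });
```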
> On 11 Nov 2023, at 17:30, TDAS wrote:
>
> Hey all
>
> I have Clouseau running, and have written a search index which is working
>
>> Ok thanks, any tips on installing JDK 8 on Debian bullseye? I can’t find
>> anywhere with any suggestions for a version that early, apart from one which
>> says I need to sign up with Oracle! this is becoming a much bigger headache
>> than I had envisaged
>>
>
There's https://docs.couchdb.org/en/stable/install/search.html
You can use up to Java 8 but nothing newer.
We dropped log4j, by the way, though Clouseau only ever used log4j 1.x, which
was not affected by the Log4Shell vulnerability. Clouseau now uses slf4j and
you need to choose which adapter you'd like.
userCtx, and secObj *but* for max power the verify function would also need
> to call/request other endpoints, for example, .length of GET all db with
> owner/author = userCtx.id/sub in order to limit db's per user.
>
> On Sat, Jul 8, 2023 at 2:41 PM Robert Newson wrote:
>
>>
Hi,
Currently there are no fine-grained read access controls within a database.
Our advice is to separate documents into different databases to achieve this
level of control or, as you suggest, to put such logic in an application or
proxy that mediates all access to couchdb.
> On 14 Jun 2023, at 09:22, Luca Morandini wrote:
>
> On Wed, 14 Jun 2023 at 17:23, Robert Newson wrote:
>
>>
>> There are no votes, no elections and there are no leader nodes.
>>
>
> As I see it, when there is a quorum to reach there is an implicit
Hi,
There are no votes, no elections and there are no leader nodes.
CouchDB chooses availability over consistency and will accept reads/writes even
if only one node (that hosts the shard ranges being read/written) is up.
In a 3-node, 3-replica cluster, where every node hosts a copy of every
Hi,
The code is definitive:
https://github.com/apache/couchdb/blob/604526f5f93df28138a165a666e39ff37f3fdc06/src/mem3/src/mem3.erl#L391
n(DbName) div 2 + 1;
That is, (N/2) + 1, where (N/2) is rounded down to nearest integer.
For odd numbers of N (the only kind we recommend) the doc
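The Erlang clause above can be sketched in one line (a direct transcription of
n div 2 + 1, not couchdb's full read/write path):

```javascript
// Default quorum: floor(n / 2) + 1, where n is the replica count.
const quorum = n => Math.floor(n / 2) + 1;

// quorum(1) === 1, quorum(3) === 2, quorum(4) === 3
```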
Hi,
The bookmark encodes the "order" property of the last result from each shard
range, and a query with a bookmark parameter is simply retrieving matches that
come after those order values. If the database changes between queries
(documents added, changed or removed) such that the overall
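A toy sketch of the bookmark idea described above (not CouchDB's actual
implementation; real bookmarks are opaque encoded values covering each shard
range, and the row shape here is made up):

```javascript
// Each row carries an "order" value; a bookmark is just the order value of
// the last row returned, and the next query resumes strictly after it.
const rows = [{ id: 'a', order: 1 }, { id: 'b', order: 2 }, { id: 'c', order: 3 }];

function page(all, bookmark, limit) {
  const after = bookmark === undefined ? all : all.filter(r => r.order > bookmark);
  const out = after.slice(0, limit);
  return { rows: out, bookmark: out.length ? out[out.length - 1].order : bookmark };
}

const p1 = page(rows, undefined, 2);   // rows a, b
const p2 = page(rows, p1.bookmark, 2); // row c
```

If the database changes between queries, rows whose order values now fall
before the bookmark are simply never revisited, which is the behaviour the
text describes.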
It doesn't have to be. Couchdb and Clouseau communicate over Erlang RPC (the
same protocol the couchdb nodes use to talk to each other). You can specify the
Clouseau node name in the couchdb configuration. But do note that they are
still _paired_. Each couchdb node should be configured to talk
Hi,
The easiest approach would be to have haproxy send something else instead, but
note that some tools might break if they can't retrieve the welcome message.
I've confirmed that the replicator would not be affected. We welcome reports of
your success and/or issues you face by removing this.
you've copied the shard files over. You then
create the '_dbs' doc yourself. (Note that in BigCouch this database was
called "dbs".)
B.
> On 8 Jul 2022, at 09:08, Luca Morandini wrote:
>
> On Fri, 8 Jul 2022 at 17:17, Robert Newson wrote:
>>
>> Hi,
>>
Hi,
There's a bug in 3.1.0 that affects you. Namely that the default 5 second
gen_server timeout is used for some requests if ioq bypass is enabled. Please
check if your config has a [ioq.bypass] section and try again without bypasses
for a time.
If you could explain your migration process in
Hi Rick,
I think the explanation is straightforward given your last comment. Indexes
are not replicated; they are only built locally. So that original error is
likely a timeout waiting for the index to build.
B.
> On 9 May 2022, at 21:16, Rick Jarvis wrote:
>
> It would appear it is
Hi,
Bintray went offline a while ago. Our official instructions and docs were
updated ahead of that to point to the new location for our binary artefacts.
At https://docs.couchdb.org/en/stable/install/index.html check the
"Installation using the Apache CouchDB convenience binary packages"
and
> add a custom field like timestamp?
>
> On Wed, Sep 8, 2021 at 11:28 PM Robert Newson wrote:
>
>> Hi,
>>
>> Unfortunately, no. CouchDB only stores what you put in and does not add
>> supplemental data like a timestamp. If you have the couchdb log you might
Hi,
Unfortunately, no. CouchDB only stores what you put in and does not add
supplemental data like a timestamp. If you have the couchdb log you might find
a record of the original PUT request, though.
B.
> On 8 Sep 2021, at 20:42, Sultan Dadakhanov wrote:
>
> Googled but unsuccessfully
>
>
> Best regards
> Paul
>
> On Thu, 19 Aug 2021 at 10:24, Robert Newson wrote:
>
>> Hi Paul,
>>
>> We welcome feedback on why the automatic compaction system (in its default
>> configuration or custom) is not appropriate for you.
>>
>> B.
>>
Hi Paul,
We welcome feedback on why the automatic compaction system (in its default
configuration or custom) is not appropriate for you.
B.
> On 19 Aug 2021, at 05:29, Paul Milner wrote:
>
> Hi Adam
>
> Thanks for the feedback. I was actually struggling with which options to set
> per
Just agreeing with all previous responses but would add that it might make
sense in your setup to put epmd under direct management (runit, systemd, etc)
and arrange for it to start before either service. And another note that if
epmd _crashes_ then existing nodes do not re-register (and that’s
Hi,
It’s worth remembering that the reason the new _rev is not available in your
_update handler is because the database update happens afterward, and thus the
value is not known. Indeed, it is not known if the update even succeeded (or
failed because couchdb crashed, or there was a
Hi,
I can confirm that Cloudant does not enable the proxy authentication handler
nor supports externalising authentication/authorization decisions in any other
way. Use either IBM IAM or the CouchDB _users database within your account
(note that the _users database option is not available for
_id is indeed unique across the nodes of the cluster but that isn't helpful to
your cause, because a document can have multiple, equally valid versions
(called "revisions" in couchdb terms).
In CouchDB 2.x and 3.x, and with a default "N" value of three, each of the
three nodes will accept a
From 3.0 onward couchdb won’t even start unless there’s at least one admin
configured.
--
Robert Samuel Newson
rnew...@apache.org
On Mon, 4 May 2020, at 22:20, Bill Stephenson wrote:
> Thank you Joan!
>
> It took me some time to figure our where those CouchDB config files are
> on my
Noting a) that replication only replicates the latest revision, not the older
ones, and b) that compaction is not optional; you are strongly advised not to
go this way.
--
Robert Samuel Newson
rnew...@apache.org
On Tue, 28 Apr 2020, at 11:46, Garren Smith wrote:
> I think it would be better to create a
It’s not clear what you’re reporting here.
Do you get a response or not?
If you do, please show it.
If not, check couch.log for output from that time and show that.
> On 16 Jan 2020, at 14:09, Betto McRose [icarus] wrote:
>
> Hi all
> I got this issue I can't figure out what I'm missing
> I
returns the previous
> revision of the doc, before the conflict happened?
>
>
> On 12/12/19 10:18 PM, Robert Newson wrote:
>> Overwrote, are you sure? Was there no other revision available?
>>
>> What should happen is that both versions of the document will be replica
Overwrote, are you sure? Was there no other revision available?
What should happen is that both versions of the document will be replicated to
both sides, and one of them (the same one) will be chosen as the "winner". The
other is always available until you delete it. Query with
Indeed puzzling.
If you delete the database (DELETE /dbname) and if this succeeds (2xx response)
then all of the db data is deleted fully. If you think you're seeing data
persisting after deletion you have a problem (the delete is failing, or you're
not really deleting the db, or something
Hi,
The most likely explanation is there is a document that you update frequently
that happens to land in the 8000-9fff shard range.
Noting that you did not need to delete and replace the file, we strongly
recommend against modifying database files directly, as compaction would have
Hi,
Eek. This queue should never get this big, it indicates that there is far too
much logging traffic generated and your target (file or syslog server) can't
take it. It looks like you have 'debug' level set which goes a long way to
explaining it. I would return to the default level of
and adding more nodes for
> those shards to live on, at the expense of view, all_docs and changes
> requests becoming more expensive.
>
> > On 12. Mar 2019, at 08:08, Vladimir Ralev wrote:
> >
> > OK, I see. Thank you.
> >
> > On Mon, Mar 11, 20
3 times less performance from the cluster as a whole.
>
> If my understanding is correct, I imagine this would be a common use-case
> for couch?
>
> On Mon, Mar 11, 2019 at 4:58 PM Robert Newson wrote:
>
> > r and w are no longer configurable from the config file by design.
r and w are no longer configurable from the config file by design. The default
is n/2+1 (so 3 in your case) unless you specify r or w as request parameters.
Setting n = 4 for a 4-node cluster is very unusual; do you really need 4 full
copies of your data?
couchdb will also automatically lower
Forcing clients to do short (<5s) requests feels like a general good, as
> >> long as meaningful things can be done in that time-frame, which I strongly
> >> believe from what we've said elsewhere that they can.
> >>
> >> That makes sense, but how would we do tha
Hi,
Given that option A is the behaviour of feed=continuous today (barring the
initial whole-snapshot phase to catch up to "now") I think that's the right
move. I confess to not reading your option B too deeply but I was there on IRC
when the first spark was lit. We can build some sort of
ike a little more complex than before...
> >
> > So the first thing I’ll try is copying the full /data directory (I need to
> > get this running now).
> >
> > Replication is a bit difficult if you cannot open ports and the dev
> > machines don’t have a fix
since 2.0 there is more to this than copying the dbname.couch file around. For
one thing, every database is now sharded, so you have several .couch files to
copy (even if you only have one node). So make sure you've copied them all and
kept their directory hierarchy. In addition there is a meta
Were the six missing documents newer on the target? That is, did you delete
them on the target and expect another replication to restore them?
Sent from my iPhone
> On 9 Mar 2017, at 22:08, Christopher D. Malon wrote:
>
> I replicated a database (continuously), but ended
Deleted docs return 404 when fetched, that's normal. If you're fetching an
older revision than the latest, it will also be missing if you've compacted the
database.
Sent from my iPhone
> On 24 Dec 2016, at 17:32, Ian Goodacre wrote:
>
> Hi all,
>
> I am running
ne data plan when I was getting the
> 403
>
> Sent from my iPhone
>
>> On Aug 25, 2016, at 3:09 AM, Robert Newson <rnew...@apache.org> wrote:
>>
>> Maybe you have a design doc with a validate_doc_update function that is
>> throwing "forbidden".
Maybe you have a design doc with a validate_doc_update function that is
throwing "forbidden".
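A sketch of such a validate_doc_update function (the rule itself is
hypothetical; the mechanism, throwing an object with a `forbidden` property to
make CouchDB return 403, is the standard one):

```javascript
// Hypothetical validation rule: only admins may delete documents.
// Throwing {forbidden: ...} causes CouchDB to reject the write with a 403.
function validate(newDoc, oldDoc, userCtx) {
  if (newDoc._deleted && userCtx.roles.indexOf('_admin') === -1) {
    throw { forbidden: 'Only admins may delete documents.' };
  }
}
```

If a rule like this exists in any design document in the database, a DELETE by
a user without the required role fails exactly as described.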
Sent from my iPhone
> On 24 Aug 2016, at 23:47, herman...@gmail.com wrote:
>
> Hi there,
>
> Trying to delete a document and getting a 403 back. The delete is executed
> as an admin user, and I can
dev/run
Sent from my iPhone
> On 13 Aug 2016, at 19:19, Cihad Guzel wrote:
>
> Hi
>
> I want to use couchdb for my project testing. So I want to embed couchdb in
> my project. Then I run couchdb with my script programmatically and make
> test. After test, I stop couchdb
You'll need to do so on port 5986, the node-local interface.
Sent from my iPhone
> On 23 Jul 2016, at 07:15, Constantin Teodorescu <braila...@gmail.com> wrote:
>
>> On Sat, Jul 23, 2016 at 12:47 AM, Robert Newson <rnew...@apache.org> wrote:
>>
>> Are you up
Are you updating one doc over and over? That's my inference. Also you'll need
to run compaction on all shards then look at the distribution afterward.
Sent from my iPhone
> On 22 Jul 2016, at 21:02, Peyton Vaughn wrote:
>
> Hi,
>
> I've been working through getting a
urces. It's possible that I
> assumed they were starting over from seq 1 when in fact they were never able
> to complete a full replication in the first place.
>
> --
> Paul Okstad
>
>> On May 26, 2016, at 2:51 AM, Robert Newson <rnew...@apache.org> wrote:
>>
There must be something else wrong. Filtered replications definitely make and
resume from checkpoints, same as unfiltered.
We mix the filter code and parameters into the replication checkpoint id to
ensure we start from 0 for a potentially different filtering. Perhaps you are
changing those?
Recent Erlang versions make it possible to encrypt the rpc traffic. We don't
currently include those settings in the run scripts.
http://erlang.org/doc/apps/ssl/ssl_distribution.html
> On 26 Apr 2016, at 22:43, Oleg Cohen wrote:
>
> Greetings,
>
> I would like
emfile means you ran out of file descriptors.
> On 29 Mar 2016, at 05:04, Raja wrote:
>
> Hi Everyone,
>
> We seem to be getting a crash when loading a lot of records in a short
> interval into CouchDB. The crash details are available at:
>
It's definitely not supposed to run this way. You'll certainly corrupt your
databases if you allow two couchdb instances to write to the same files.
> On 3 Oct 2015, at 04:56, Dan Santner wrote:
>
> I think this is just not the way couch was meant to be used but….
>
> I
Definitely master, a lot of work has been done in the year(!) since the
preview.
> On 2 Oct 2015, at 12:14, Ying Bian wrote:
>
> OK. I think I would stay on master. Thanks,
>
> -Ying
>
>> On Oct 2, 2015, at 18:43, Alexander Shorin wrote:
>>
>> Hi,
>>
>>
It's an ignorable error caused by the code server scanning for .beam files
starting in current working dir. The init script should cd to somewhere that
couchdb can read, but does not. Using sudo must have a side effect of changing
cwd. I strongly advise returning to su but adding a cd call to
The default timeout in the vhost module is a bug; 5s is not long enough for that.
Sent from my iPhone
On 6 Aug 2014, at 12:48, Jason Woods de...@jasonwoods.me.uk wrote:
Hi all,
Hopefully someone can help shed some light on this. The logs aren't the
easiest thing to understand :(
CouchDB is
Sorry about that. Fixed on master.
Sent from my iPhone
On 2 Jul 2014, at 01:53, Nathan Vander Wilt nate-li...@calftrail.com wrote:
I am trying to set up CouchDB from a script, which makes a couch.ini config
file that includes this line:
[admins]
admin = password
On my local
Sounds like COUCHDB-1415.
Sent from my iPhone
On 17 Jun 2014, at 12:34, kankanala karthik karthi...@beehyv.com wrote:
Hi All,
In the TAMA implementation, I came across an issue with Couchdb. (Version
1.2.0) ,
We are using named documents to maintain unique constraint logic in the
read _changes?descending=true row by row until you reach a non-design document?
The doc to ddoc ratio should be strongly in your favor.
B.
On 6 Jan 2014, at 22:00, Stanley Iriele siriele...@gmail.com wrote:
Could you do what Jens just mentioned and just make a filter? That way a
seq
It is relevant, the OP could use multiple databases to expose the subset of
documents to the appropriate subset of users.
Mentioning Couchbase is not relevant. :)
B.
On 2 Jan 2014, at 00:40, Jens Alfke j...@couchbase.com wrote:
On Jan 1, 2014, at 3:27 PM, Robert Newson rnew...@apache.org
behind a desk record / virtual host, that should do the
trick. The user that is used by the app is read only
Robert Newson rnew...@apache.org wrote:
there’s no notion of read-protection in CouchDB.
There’s no document level read protection, but you can certainly grant
or deny read access
there’s no notion of read-protection in CouchDB.
There’s no document level read protection, but you can certainly grant or deny
read access to users on a per database basis. That’s by design due to the ease
that information could leak out through views (particularly reduce views). The
Welcome!
On 1 Jan 2014, at 20:20, Simon Metson si...@cloudant.com wrote:
w00t!
On Wednesday, 1 January 2014 at 19:24, Dave Cottlehuber wrote:
Dear community,
There's nothing like starting off the New Year with a New Committer!!
I am pleased to announce that the CouchDB Project
I filed https://issues.apache.org/jira/browse/COUCHDB-2013 for this.
The patch will be a little more involved than just changing the prompt function
as the run method does not respect the timeout for many of its clauses. While
changing the gen_server call to infinity is an easy fix it removes
I've confirmed that the native view server honors that timeout, can
you tell me what;
curl localhost:5984/_config/couchdb/os_process_timeout
returns? You might need to bounce couchdb in any case, as it applies
this timeout setting when it creates the process, and we keep a pool
of them around,
PM, Robert Newson rnew...@apache.org wrote:
I've confirmed that the native view server honors that timeout, can
you tell me what;
curl localhost:5984/_config/couchdb/os_process_timeout
returns? You might need to bounce couchdb in any case, as it applies
this timeout setting when it creates
, Robert Newson rnew...@apache.org wrote:
couch_native_server has the set_timeout callback, though. I'll re-test
shortly.
B.
On 18 December 2013 18:17, Alexander Shorin kxe...@gmail.com wrote:
iirc native query server has hardcoded timeout 5000 and ignores
os_process_timeout setting
There is something hard coded in there and I will find it eventually
and find why it was put there and by whom.
This attitude might discourage people from helping you with your efforts.
B.
On 18 December 2013 22:33, david martin david.mar...@lymegreen.co.uk wrote:
On 18/12/13 18:05, Robert
emfile: you ran out of file descriptors.
B.
On 17 December 2013 21:02, Glen Aidukas gaidu...@behaviormatrix.com wrote:
Hello,
I am hoping someone knows what my issue might be. We recently migrated our
data from a couchdb v1.2 server over to a new build with more resources
running v1.5.
after migrating from couchdb v1.2 to v1.5
On Tue, Dec 17, 2013 at 3:27 PM, Robert Newson rnew...@apache.org wrote:
emfile: you ran out of file descriptors.
B.
Can this be solved with a bigger thesaurus? haha (sorry) --Matt
And, for posterity, you can check;
cat /proc/`pidof beam.smp`/limits
to check that it was applied.
B.
On 17 December 2013 22:11, Robert Newson rnew...@apache.org wrote:
Yup, thanks ubuntu/debian for that (longstanding annoyance). btw, it's
/etc/pam.d/su though; couchdb su's to the couchdb
Hi,
Add a property called since_seq to your second replication with the
update sequence you wish to start at. Like;
{"source": "source url here", "target": "target url", "since_seq": 9}
This was introduced in CouchDB 1.2.0;
* Added optional field `since_seq` to replication objects/documents.
It allows
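A sketch of a complete replication document using since_seq (the URLs and the
sequence value are placeholders):

```javascript
// Hypothetical replication document; POST it to /_replicate or save it in
// the _replicator database. since_seq makes the replication start from the
// given source update sequence instead of 0.
const repDoc = {
  source: 'http://localhost:5984/source_db', // placeholder URL
  target: 'http://localhost:5984/target_db', // placeholder URL
  since_seq: 9
};
```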
I think your image\/png is just an artifact of your printing method,
you don't need to escape the forward slash in content_type, see
example below;
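To illustrate the point about forward slashes (runnable in any JS engine):

```javascript
// JSON never *requires* escaping "/": stringify leaves it alone, and a
// parser must accept "\/" as equivalent to "/".
const s = JSON.stringify({ content_type: 'image/png' });
// s === '{"content_type":"image/png"}'

const back = JSON.parse('{"content_type":"image\\/png"}');
// back.content_type === 'image/png'
```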
, which I can not relate to my input
data:
Exception Problems updating list of documents (length = 1): (500,
('badarg', '46'))
What does that '46' mean?
On Wed, Dec 11, 2013 at 1:47 PM, Robert Newson rnew...@apache.org wrote:
I think your image\/png is just an artifact of your printing method
base64 input?
B.
On 11 December 2013 13:11, Robert Newson rnew...@apache.org wrote:
http://json.org/string.gif talks escaping back slash, not forward
slash. The PDF page 194 talks about escaping forward slash within a
RegExp statement in Javascript, which is not JSON.
B.
On 11 December 2013 12
problem has been solved. I am not escaping anything in the
content_type: the json library is probably doing that. What I need to do
is to attach real base64 encoded data, which has solved my problem.
On Wed, Dec 11, 2013 at 2:15 PM, Robert Newson rnew...@apache.org wrote:
➜ ~ curl
Hi Michael,
This is the CouchDB user list and the
https://wiki.apache.org/couchdb/People_on_the_Couch page is for users of
CouchDB, not MongoDB.
B.
On 10 December 2013 13:03, Michael Giglhuber m.giglhu...@newelements.de wrote:
Hi all,
I would be glad, if you add me to the
Yeah, it only works on top level fields right now.
B.
On 9 December 2013 17:48, Stefan Klein st.fankl...@gmail.com wrote:
Sorry, hit send too fast. :(
2013/12/9 Stefan Klein st.fankl...@gmail.com
Hi couch users,
i got some application specific data in my user documents and have to make
The more tools the better, imo.
B.
On 9 December 2013 22:41, Skitsanos i...@skitsanos.com wrote:
Salut Dragos,
I guess you weren't aware of Kanapes IDE (http://kanapeside.com), a fully
featured CouchDB IDE, made in Bucharest btw...
On Tuesday, December 10, 2013, Dragos Stoica wrote:
Brilliant!
Pull Requests for the features in your fork would be gratefully received
too.
On 8 Dec 2013 15:21, Marcello Barnaba v...@openssl.it wrote:
Hello list,
I have built a package of CouchDB-Lucene for OpenSuSE (11.4 ~ 13.1)
systems.
It is available on
CouchDB views are one-dimensional, so you will not succeed with a
two-dimensional geo query. You could try couchdb-lucene, which can.
On 8 Dec 2013 15:51, Qaqabincs luji...@gmail.com wrote:
I use a view to query an area, and emit [lng, lat] as key, so I use
...?startkey=[min_lng,
https://wiki.apache.org/couchdb/Replication#Named_Document_Replication ?
On 5 December 2013 08:10, Benoit Chesneau bchesn...@gmail.com wrote:
this is not really possible directly for now.
maybe copy to a new doc id, replicate this doc id and delete on the source?
(why renaming on the other
To be clearer, startkey_docid is *ignored* unless you also specify startkey.
B.
On 5 December 2013 23:23, Robert Newson rnew...@apache.org wrote:
The question is meaningless, let me explain.
startkey_docid (and endkey_docid) are used for selecting ranges where
the view key is the same
The question is meaningless, let me explain.
startkey_docid (and endkey_docid) are used for selecting ranges where
the view key is the same, it is *not* a separate index. Views are in
key order only.
under the covers, the true view key is actually [emitted_key_order,
doc._id], the rows are
).
To get back to your use case, I'm assuming doc.user is not unique but,
somehow, you know the doc id of the user you're looking for? If so,
why not just use _all_docs?key=req.param.id and don't build the view
at all?
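A simplified sketch of the ordering described above (sample data hypothetical;
real collation is ICU-based, but the composite [emitted_key, doc_id] idea is
the point):

```javascript
// Rows sort by the pair [emitted_key, doc_id]; startkey_docid only narrows
// the range among rows that share the same emitted key.
const rows = [
  { key: 'alice', id: 'doc3' },
  { key: 'alice', id: 'doc1' },
  { key: 'bob',   id: 'doc2' }
];

rows.sort((a, b) =>
  a.key < b.key ? -1 : a.key > b.key ? 1 :
  a.id  < b.id  ? -1 : a.id  > b.id  ? 1 : 0);

// resulting order: [alice, doc1], [alice, doc3], [bob, doc2]
```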
On 5 December 2013 23:23, Robert Newson rnew...@apache.org wrote
At your own risk. CouchDB makes no promise not to break reduce
functions that don't follow the rules, though we won't do it
capriciously.
B.
On 3 December 2013 18:00, Oliver Dain opub...@dains.org wrote:
Hi Robert,
Thanks very much for the reply. That makes sense.
I gather this means that
Because the order that we pass keys and values to the reduce function
is not defined. In sharded situations (like bigcouch, which is being
merged) an intermediate reduce value on an effectively random subset
of keys/values is generated at each node and a final rereduce is done
on all the
Odd, sounds like Futon is confused. Try clearing your browser cache
and reloading the page. (That or someone else is editing the document
in another window)
B.
On 29 November 2013 09:54, John Norris j...@norricorp.f9.co.uk wrote:
Just to add, I notice there is a 409 error in the logs - a
map:
emit(doc.created, doc.value);
reduce:
_stats
then query with startkey and endkey appropriately. This will give you
the sum of all values between the two keys and the number of rows.
Divide one by the other to derive mean average. This will work for
startkey/endkey's that span hours, days
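The client-side arithmetic can be sketched with a made-up _stats reduce row
(the numbers are placeholders; the field names are the ones _stats returns):

```javascript
// Hypothetical reduced value returned for a startkey/endkey range:
const reduced = { sum: 180, count: 12, min: 5, max: 25, sumsqr: 3100 };

// Mean average over the range = sum of values / number of rows.
const mean = reduced.sum / reduced.count; // 15
```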
What request would trigger this fold? What arguments would it take?
I'm not sure what's painful about the existing _bulk_docs read and
write APIs, though they exist primarily for bulk importing/exporting; most
database interactions are at the document or view level.
Since the word transaction was mentioned,
Views can be used to look up a specific key or a contiguous range of
keys, the original poster is wrong to think that each item in the view
is separately queryable.
That said, [600,69] is greater than [400,50] and less than [1000, 100]
and so should be returned, even in 1.0.4.
B.
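A simplified element-wise comparison illustrating why (this ignores CouchDB's
full ICU-based cross-type collation and only handles numeric array keys):

```javascript
// Array keys compare element by element; a shorter array that is a prefix
// of a longer one sorts first.
function cmpArray(a, b) {
  for (let i = 0; i < Math.min(a.length, b.length); i++) {
    if (a[i] < b[i]) return -1;
    if (a[i] > b[i]) return 1;
  }
  return a.length - b.length;
}

// [600, 69] sorts after [400, 50] and before [1000, 100].
```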
On 25
.
On Fri, Nov 22, 2013 at 3:54 PM, Robert Newson rnew...@apache.org wrote:
Yup, we know. The start/stop code is quite complicated (*too*
complicated) and seems to go wrong more and more.
Jan and I are going to spend some time digging into it over the weekend.
The main issue is that the pid
between 1.3 and 1.4/1.5, so I hope it's still relevant.
hth,
Mike
On Fri, Nov 22, 2013 at 3:57 PM, Robert Newson rnew...@apache.org wrote:
That would be great!
On 22 November 2013 14:55, Mike Marino mmar...@gmail.com wrote:
I have definitely had a similar issue, and had to fix the script
_bulk_docs requires a different input format than _all_docs produces. You
can't pipe one to the other.
On 22 Nov 2013 21:22, Andy Wenk a...@nms.de wrote:
Hi Sreedhar,
On 21 November 2013 14:51, Sreedhar P V venkatasridha...@gmail.com
wrote:
Hi Team,
I am using couchdb for one of my projects
AM, Robert Newson wrote:
asn1 comes from your erlang install, we don't ship it, but it implies
you're missing standard parts of erlang. I'm assuming debian or
ubuntu, therefore apt-get install erlang-asn1 and probably others.
The policy that forces package maintainers to subdivide erlang because
Hi,
There has been a protracted lull in the bigcouch merger work but we're
doing some more at couchhack in December and then a whole lot more in
Q1, hopefully to completion.
We're not yet sure what migration will look like. At worse, it will be
replication based but we're mindful to do better.
{app_would_not_start,asn1} is pretty telling.
try 'erl' then application:start(asn1). and see what error you get.
If it's a not_started for some other app, try starting that one.
You'll probably do this a few times before finding the thing that
fails to start. Likely, it will be one that requires
1) a stop the world lock when writing to disk
There's no such thing in couchdb. Databases are append-only and there's a
single writer per database, but concurrent PUT/POST requests are faster than
serial anyway, and writes to different databases are fully independent.
2) Stack traces are hard to read, not
I guess this was released from moderation by someone that didn't see
your other email after you subscribed, let's consider this thread
dead?
B.
On 19 November 2013 21:16, Diogo Moitinho de Almeida diogo...@gmail.com wrote:
Hello,
Based on the research that I've done, CouchDB seems like a
A write requires updating views and reads have to wait for the update
Is not true. Database writes are not coupled to view updates.
Sent from my iPad
On 20 Nov 2013, at 20:59, Mark Hahn m...@reevuit.com wrote:
A write requires updating views and reads have
to wait for the update