Miles,
a mesh-like architecture which uses the CouchDB replication protocol to
distribute/aggregate data is perfectly viable. Like other members of
the ML, I also have several projects employing this approach. And like the
others I also cannot disclose any details, not even the quantity of nodes involved; you
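A mesh like this can be driven entirely by CouchDB's `_replicator` database: one continuous replication document per ordered pair of nodes. A minimal sketch in Python (the node URLs and database name are made-up examples, not from any of the projects mentioned):

```python
import itertools

def mesh_replication_docs(node_urls, db="appdata"):
    """Build _replicator documents for a full-mesh, continuous
    replication topology between CouchDB nodes (hypothetical URLs)."""
    docs = []
    # itertools.permutations yields every ordered (source, target) pair,
    # i.e. a push replication in each direction for every pair of nodes.
    for src, tgt in itertools.permutations(node_urls, 2):
        docs.append({
            "_id": "repl-%s-to-%s" % (src.split("//")[1], tgt.split("//")[1]),
            "source": "%s/%s" % (src, db),
            "target": "%s/%s" % (tgt, db),
            "continuous": True,
        })
    return docs

nodes = ["http://node-a:5984", "http://node-b:5984", "http://node-c:5984"]
docs = mesh_replication_docs(nodes)
# 3 nodes -> 3 * 2 = 6 continuous replications for a full mesh
print(len(docs))
```

Each document would then be PUT into the `_replicator` database on whichever node should run that job; the point is only that the mesh is just n·(n-1) ordinary replication documents.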
Hi Miles,
We don't have much experience with 5000 instances, but we think an ember-pouch
app would not have a problem with this. Take a look at
https://bloggr.exmer.com/
- Martin
On Wed, 20 May 2020 at 14:23, Miles Fidelman wrote:
> Hi Folks,
>
> I'm thinking of using Couch (or Couch plus Pouch) as
Hi Miles,
I wanted to reply for a while, but struggled to find a good angle. I think I
finally figured out what I missed. I’m not sure I understand your deployment
scenario.
When I think conference app, I think folks having that on their mobile phones,
or tablets. Given that, you’d be using
On Jan 2, 2014, at 12:17 PM, Jan Lehnardt j...@apache.org wrote:
We added /_db_updates in 1.4.0 that allows building the above with the
difference that a replication only runs for active users, thus delaying most of
the work until it is needed *and* avoiding having to
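The pattern `/_db_updates` enables can be illustrated with a small consumer that reacts only to events for active databases, instead of keeping idle continuous replications around. A sketch; the `userdb-` prefix and the target URL are hypothetical:

```python
import json

def replication_for_update(event, user_prefix="userdb-",
                           target_base="http://central:5984"):
    """Given one event from CouchDB's /_db_updates feed, return a
    replication request for that database, or None if it is not a
    per-user database we care about. (Names here are made up.)"""
    if event.get("type") not in ("created", "updated"):
        return None
    db = event.get("db_name", "")
    if not db.startswith(user_prefix):
        return None
    # Only databases that just saw activity trigger a (one-shot)
    # replication; idle users cost nothing until they write again.
    return {"source": db, "target": "%s/%s" % (target_base, db)}

event = json.loads('{"db_name": "userdb-miles", "type": "updated"}')
print(replication_for_update(event))
```

A real consumer would read the feed with `feed=continuous` and POST each returned request to `/_replicate`, but the deferral logic is all in the function above.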
On Jul 11, 2013, at 9:25 AM, Bill Foshay bill.fos...@noteandgo.com wrote:
Ignoring filtering, is there any idea roughly how many persistent
replications can be running before it starts to hurt performance. I know
this is a vague question, highly dependent on the system resources of the
Robert Newson rnewson@... writes:
If you didn't
have filters at all, but still had n^2 replications, you've still got
a scaling problem, it's just not directly related to the filtering
overhead.
B.
Benoit Chesneau bchesneau@... writes:
The js evaluation against a lot of documents or with many requests can be
really slow, especially when you start a replication on a large database.
This initial replication can take a long time. This is why the view
changes feature was added in rcouch.
Hi
Yup sounds right :)
- benoit
On Wed, Jul 10, 2013 at 6:25 PM, Bill Foshay bill.fos...@noteandgo.comwrote:
It's not true. Passing replication through a filter is a linear
slowdown (the cost of passing the document to spidermonkey for
evaluation), nothing more. Filtered replication is as
incremental/resumable as non-filtered replication.
Your scalability challenge is that the number of persistent
Not sure about recent builds, but older, pre-1.2 builds had a weird timeout problem if
the time between processing a range of documents that were filtered vs not
filtered was too great. I think this was the heartbeat problem.
i.e. we had about 800k docs and in some ranges filtering would eliminate
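For context on what is being filtered: a replication filter is a JavaScript function stored in a design document, and the replication request names it. A sketch, with Python just holding the JSON; the `by_owner` filter and `owner` field are made-up examples:

```python
import json

# A design document carrying a replication filter. CouchDB passes each
# candidate document (and the request) to this JavaScript function; only
# docs for which it returns true are replicated. This per-document
# evaluation is the linear cost Robert describes.
design_doc = {
    "_id": "_design/repl",
    "filters": {
        "by_owner": """function (doc, req) {
            return doc.owner === req.query.owner;
        }"""
    },
}

# The replication request names the filter and supplies its parameter.
replication = {
    "source": "http://a:5984/appdata",
    "target": "http://b:5984/appdata",
    "filter": "repl/by_owner",
    "query_params": {"owner": "miles"},
}

print(json.dumps(replication, indent=2))
```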
On Jul 9, 2013, at 8:50 AM, Robert Newson rnew...@apache.org wrote:
The processing for the filter makes the underlying exponential growth
hurt sooner, yes, but I took the question as written. If you didn't
have filters at all, but still had n^2 replications, you've still got
a scaling problem, it's just not directly related to the filtering
overhead.
B.
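The n² growth is easy to put numbers on. A full mesh needs one replication per ordered pair of nodes; a hub-and-spoke topology (a common workaround, not something proposed in this thread) needs only two per spoke:

```python
def full_mesh_replications(n):
    # Every ordered pair of nodes gets its own replication: n * (n - 1).
    return n * (n - 1)

def hub_and_spoke_replications(n):
    # Each of the n spoke nodes replicates to and from a single hub.
    return 2 * n

for n in (5, 50, 500):
    print(n, full_mesh_replications(n), hub_and_spoke_replications(n))
```

At 50 nodes that is 2450 jobs for the mesh versus 100 for the hub, which is the scaling problem regardless of whether filters are involved.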
Jens Alfke jens@... writes:
Yes, I agree that CouchDB filtering is not significantly higher-CPU than
not filtering :) and likely
cheaper if you include the savings from not transmitting the filtered-out
revisions.
But if you _do_ filter heavily, so any one client is seeing only a small
Already had a look at BigCouch? http://bigcouch.cloudant.com/
- Mathias
On Jul 4, 2012, at 3:21 , Michael Parker wrote:
Given that CouchDB is a multi-master system, it seems that reads scale
gracefully while writes do not -- because N reads among k nodes can be
spread as N/k reads per node,
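Michael's point can be made concrete with back-of-envelope arithmetic: in a k-node multi-master cluster, reads divide across nodes, but every write is eventually replicated to every node (the numbers below are made up):

```python
def per_node_load(reads, writes, k):
    """Rough per-node load in a k-node multi-master cluster: reads
    spread across the nodes, but each write lands on all of them
    (ignoring the replication traffic itself)."""
    return {"reads": reads / k, "writes": writes}

print(per_node_load(reads=90000, writes=1000, k=3))
```

Adding nodes keeps shrinking the read share while the write share per node stays constant, which is why writes don't scale this way.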
Hi guys, does anyone know if BigCouch works on CouchDB 1.2?
IIRC it's currently built off v1.0.2
On Wednesday, 4 July 2012 at 13:49, Gabriel Mancini wrote:
We've just moved away from BigCouch; it's definitely moved to the 1.1.x branch,
but not any further.
Martin
On Wednesday, 4 July 2012 at 14:11, Simon Metson wrote:
I should note that BC (and cloudant.com, even more so) generally contain more
than the advertised couchdb release.
On 4 Jul 2012, at 14:11, Simon Metson wrote:
Thanks. Do you know if there will be an update, and when?
On 04/07/2012 10:12, Simon Metson si...@cloudant.com wrote:
BigCouch will be merged into CouchDB this year, and merging the 1.1.1-1.2
changes is an important part of that.
B.
On 4 Jul 2012, at 14:55, Gabriel Mancini wrote:
On Fri, Apr 22, 2011 at 11:40 AM, Jim Klo jim@sri.com wrote:
I'm part of the core Federal Learning Registry dev team
[http://www.learningregistry.org], and we're using CouchDB to store and
replicate contents of the registry within our network.
One of the questions that has come up as we
2. Is there a strategy for disk spanning to go beyond the 1TB limit by
incorporating multiple volumes, or do we need to leverage a solution like
BigCouch, which seems to require us to spin up multiple CouchDBs and do some
sort of sharding/partitioning of data? I'm curious how queries
On Dec 15, 2009, at 12:58 AM, Matteo Caprari wrote:
The list function will need to consume two iterators in the correct order.
Also, we may consider including the view names in the url instead of
in the post body.
That would make the call more explicit and consistent with the current api.
Ufff. Build breaking on Snow Leopard. Geee, miss linux for dev...
Anyway, thanks for the pointers, I'll join the dev list and get my act sorted :)
-teo
On Sun, Dec 13, 2009 at 4:52 PM, Jan Lehnardt j...@apache.org wrote:
Hi Matteo,
On 10 Dec 2009, at 11:53, Matteo Caprari wrote:
Hei chris,
I've started looking at the relevant code, this thing seems feasible.
The list function will need to consume two iterators in the correct order.
Looking at the code in couch_httpd_show it seems at least complicated
to make interleaved calls possible (i.e in the list code starting the
Hi Matteo,
On 10 Dec 2009, at 11:53, Matteo Caprari wrote:
Interesting.
Let's say I'd like to implement that feature. Where would you start?
Is there a document like "start hacking CouchDB"?
That's awesome! There are some docs on the wiki:
http://wiki.apache.org/couchdb/Development
They
On Tue, Dec 8, 2009 at 3:50 PM, Matteo Caprari matteo.capr...@gmail.com wrote:
Hello.
Given a database where each document represents a data point
(something like {date:'2001-01-01', amount:34000}),
I'd like to have a view where the numeric values are scaled to a number.
To scale the values
Hi!
To scale the values I need to know the maximum point, and I think
it's not possible to do that with map/reduce.
You can find the maximum point using map/reduce. The division of all
points must then be done manually given that point, e.g. in a view. Of
course, doing it this way has
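The two-pass approach being suggested (find the maximum with a reduce, then divide each value by it, e.g. in a _list function) can be sketched in plain Python standing in for the JavaScript view:

```python
def map_fn(doc):
    # One row per document: (key, value), as a CouchDB map would emit.
    yield doc["date"], doc["amount"]

def reduce_max(values):
    # The reduce step here is simply the maximum of the amounts.
    return max(values)

docs = [
    {"date": "2001-01-01", "amount": 34000},
    {"date": "2001-01-02", "amount": 17000},
    {"date": "2001-01-03", "amount": 8500},
]

rows = [row for doc in docs for row in map_fn(doc)]
maximum = reduce_max(v for _, v in rows)

# Second pass (e.g. in a _list function): scale each value by the max.
scaled = [(k, v / maximum) for k, v in rows]
print(maximum, scaled)
```

In CouchDB terms that is one request against the reduce for the maximum, then a second request that streams the map rows and divides as it goes, so nothing needs to be buffered.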
Hi Kosta.
I'm trying to output an svg chart using a _list function, so no client
is consuming the view directly.
I could find the max and scale the data inside the list, but that
would consume quite a lot of memory for big datasets,
unless it was possible to reset the iterator...
-teo