What kind of load were you putting on the machine?

On 12 April 2012 17:24, Robert Newson <[email protected]> wrote:
> Could you show your vm.args file?
>
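
For context, a stock BigCouch vm.args is typically just a handful of Erlang VM
flags along the lines of the sketch below; the node name and cookie are
placeholders and the exact defaults vary by install:

  # Erlang node name and shared cluster cookie (placeholders)
  -name [email protected]
  -setcookie monster
  # Erlang VM options: kernel polling and async thread pool
  +K true
  +A 4
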
> On 12 April 2012 17:23, Robert Newson <[email protected]> wrote:
>> Unfortunately, your request for help coincided with the two-day CouchDB
>> Summit. #cloudant and the Issues tab on cloudant/bigcouch are other
>> ways to get bigcouch support, but we happily answer queries here too,
>> when not at the Model UN of CouchDB. :D
>>
>> B.
>>
>> On 12 April 2012 17:10, Mike Kimber <[email protected]> wrote:
>>> Looks like this isn't the right place based on the responses so far. Shame,
>>> I hoped this was going to help with our index/view rebuild times, etc.
>>>
>>> Mike
>>>
>>> -----Original Message-----
>>> From: Mike Kimber [mailto:[email protected]]
>>> Sent: 10 April 2012 09:20
>>> To: [email protected]
>>> Subject: BigCouch - Replication failing with Cannot Allocate memory
>>>
>>> I'm not sure if this is the correct place to raise an issue I am having
>>> with replicating a standalone CouchDB 1.1.1 to a 3-node BigCouch cluster.
>>> If this is not the correct place, please point me in the right direction;
>>> if it is, does anyone have any ideas why I keep getting the following
>>> error message when I kick off a replication:
>>>
>>> eheap_alloc: Cannot allocate 1459620480 bytes of memory (of type "heap").
>>>
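
For illustration, a replication like this is normally kicked off with a single
POST to the standalone node's /_replicate endpoint; the host names and database
name below are placeholders, and the cluster is assumed to expose its HTTP
interface on the default port 5984:

  curl -X POST http://couchdb-1.1.1-host:5984/_replicate \
       -H 'Content-Type: application/json' \
       -d '{"source": "mydb", "target": "http://bigcouch-node1:5984/mydb"}'

Posting to the 1.1.1 node means the replicator process runs there, while the
writes land on whichever cluster node the target URL points at.
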
>>> My set-up is:
>>>
>>> Standalone CouchDB 1.1.1 running on CentOS 5.7
>>>
>>> 3-node BigCouch cluster running on CentOS 5.8 with the following local.ini
>>> overrides, pulling from the standalone CouchDB (78K documents):
>>>
>>> [httpd]
>>> bind_address = XXX.XX.X.XX
>>>
>>> [cluster]
>>> ; number of shards for a new database
>>> q = 9
>>> ; number of copies of each shard
>>> n = 1
>>>
>>> [couchdb]
>>> database_dir = /other/bigcouch/database
>>> view_index_dir = /other/bigcouch/view
>>>
>>> The error is always generated on the third node in the cluster, and the
>>> server basically maxes out on memory beforehand. The other nodes seem to
>>> be doing very little, but are getting data, i.e. the shard sizes are
>>> growing. I've put the copies per shard down to 1 as I'm not currently
>>> interested in resilience.
>>>
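
One quick way to compare what each node is receiving, assuming BigCouch's usual
shards/ layout under the database_dir configured above, is to check per-range
shard sizes on disk on each node:

  # total size of each shard range under the configured database_dir
  du -sh /other/bigcouch/database/shards/*
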
>>> Any help would be greatly appreciated.
>>>
>>> Mike
>>>
