I was able to fix the problem, but at a cost. I have about 2,500 databases, one 
for each of my users. I tried raising the file limit to 4096, but that wasn’t 
enough; 65535 worked and the errors went away. The following modifications made 
it possible:

I added “limit nofile 65535 65535” to the Ubuntu upstart file 
(/etc/init/couchdb.conf) after the author line:

# Apache CouchDB - a RESTful document oriented database

description "Start the system-wide CouchDB instance"
author "Jason Gerard DeRose <[email protected]>"
limit nofile 65535 65535

start on filesystem and static-network-up
stop on deconfiguring-networking
respawn

pre-start script
    mkdir -p /var/run/couchdb || /bin/true
    chown -R couchdb:couchdb /var/run/couchdb /etc/couchdb/local.*
end script

script
  HOME=/var/lib/couchdb
  export HOME
  chdir $HOME
  exec su couchdb -c /usr/bin/couchdb
end script

post-stop script
    rm -rf /var/run/couchdb/*
end script
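
A quick way to confirm the new limit actually took effect on the running Erlang 
VM (this assumes the VM process is named beam.smp, which is the usual name) is:

sudo restart couchdb
cat /proc/$(pgrep -o beam.smp)/limits | grep 'Max open files'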

Also, I changed the max open databases in /etc/couchdb/local.ini to 65535.
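
For reference, that is the max_dbs_open setting in the [couchdb] section, so it 
now mirrors the nofile limit above:

[couchdb]
max_dbs_open = 65535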

Unfortunately, CouchDB is now using WAY more CPU than before: previously it was 
hovering around 3-4%, and now it’s about 9-22%. This is on a testbed system 
where none of the user databases are being updated by real users.

I did an open-file count using this command:

lsof | grep couchdb | wc -l

And I learned that CouchDB has about 27000 files open.
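
CouchDB also tracks this itself; on a stock local node the same number should be 
available with something like (URL assumes the default port):

curl -s http://localhost:5984/_stats/couchdb/open_os_files

which is the couchdb/open_os_files metric Alexander mentions below.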

This doesn’t seem to be a very scalable solution unless I’m missing something 
here. Maybe I should be using the update notifications to trigger one-time 
replications each time a user’s DB is modified?

> On Jan 11, 2015, at 4:13 PM, Alexander Shorin <[email protected]> wrote:
> 
> CouchDB automatically closes unused file handles. However, with 1000
> active databases it's hard not to hit the default 1024 limit.
> You can set up monitoring for couchdb/open_os_files and send yourself an
> alert when it's getting close to the limit.
> --
> ,,,^..^,,,
> 
> 
> On Mon, Jan 12, 2015 at 3:07 AM, Paul Okstad <[email protected]> wrote:
>> BTW, is there a better strategy for this instead of brute-forcing the limit 
>> to be larger? It seems like a bad idea to keep over 1000 files open if I 
>> don’t even need to replicate them until a change occurs. Is this a 
>> limitation of internal continuous replication? Should I be triggering one-time 
>> replications using the database update notifications?
>> 
>>> On Jan 11, 2015, at 3:44 PM, Paul Okstad <[email protected]> wrote:
>>> 
>>> Thank you for the quick reply. I am indeed using Ubuntu and indeed using 
>>> SSL so this is extremely relevant. I’ll try out the fixes and get back.
>>> 
>>>> On Jan 11, 2015, at 3:35 PM, Alexander Shorin <[email protected]> wrote:
>>>> 
>>>> On Mon, Jan 12, 2015 at 2:24 AM, Paul Okstad <[email protected]> wrote:
>>>>> {error,emfile}
>>>> 
>>>> emfile means too many open files. With a thousand databases you will likely
>>>> hit the default ulimit of 1024 file handles.
>>>> See also this thread:
>>>> http://erlang.org/pipermail/erlang-questions/2015-January/082446.html
>>>> about other ways to solve this. For instance, on Ubuntu with upstart
>>>> there is a slightly different way to set process limits.
>>>> 
>>>> --
>>>> ,,,^..^,,,
>>> 
>> 
