I am very happy with the way this discussion turned out!
Now I have written a tiny Ruby script that updates the views after every tenth document update, and at most once per second under heavy update load. On my development machine it uses about 1.2 MB of RAM, and I can live with that.

I'll include it here in case somebody reads this discussion in the future and needs a similar script.

Thanks again for all the help and for a great database!

Best regards
Sebastian


**********************************

CONFIG: couch.ini
Add the following line:
DbUpdateNotificationProcess=/PATH/TO/view_updater.rb
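
With this line CouchDB starts the script once and then writes one line
to its stdin for every document update (in the versions I have used it
is a small JSON snippet like {"db":"kleio","type":"updated"}, but the
script below only counts lines, so the exact format does not matter).

If you ever need to react to a single database only, here is a minimal
sketch, assuming the notifications really are JSON lines with a "db"
field and that the json gem is installed:

#!/usr/bin/ruby
require 'rubygems'
require 'json'

# Read one notification per line until CouchDB closes the pipe
while line = gets
  note = JSON.parse(line) rescue next
  next unless note['db'] == 'kleio'  # only care about this database
  # ... react to the update here
end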

**********************************

SCRIPT: view_updater.rb

#!/usr/bin/ruby

###
# CONF
###

# The smallest number of changed documents before the views are updated
MIN_NUM_OF_CHANGED_DOCS = 10

# URL to the DB on the CouchDB server
# Only supports one database at the moment as that is what I need
# Should be easy to improve for several databases if needed
URL = "http://localhost:5984/kleio"

# Set the minimum pause between calls to the database
PAUSE = 1 # seconds

# One for each design document
VIEWS = ["feed/to_check",
         "feed_entries/list_for_user_by_feed",
         "subscription/number_of_subscriptions",
         "subscription_requests/request",
         "user/by_email",
         "couch_object_has_many_relations/related_documents"]




###
# "MAIN LOOP"
###

# These two variables are shared between the threads below. On MRI the
# interpreter lock makes the bare counter mostly safe; wrap the updates
# in a Mutex if you need exact counts.
run = true
number_of_changed_docs = 0

threads = []

# Updates the views
threads << Thread.new do

  while run do

    if number_of_changed_docs >= MIN_NUM_OF_CHANGED_DOCS

      number_of_changed_docs = 0

      # Asking for each view with count=0 brings the view index up to
      # date without returning any rows
      # (-s keeps curl's progress meter out of the CouchDB log)
      VIEWS.each do |view|
        `curl -s #{URL}/_view/#{view}?count=0`
      end

    end

    sleep PAUSE

  end

end

# Receives the update notification from CouchDB
threads << Thread.new do

  while run do

    update_call = gets

    # When CouchDB exits it closes our stdin and gets returns nil
    # forever, so treat nil as the signal to shut down
    if update_call == nil
      run = false
    else
      number_of_changed_docs += 1
    end

  end

end

# Good bye: wait for both threads to finish
threads.each {|thr| thr.join}
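
If you would rather not shell out to curl, here is a minimal sketch of
the same view refresh using Ruby's standard net/http library (same
count=0 trick, same URL and VIEWS constants as in the script above):

require 'net/http'
require 'uri'

# Ask for each view with count=0 so CouchDB brings the index up to
# date without returning any rows
def refresh_views(url, views)
  views.each do |view|
    uri = URI.parse("#{url}/_view/#{view}?count=0")
    Net::HTTP.get_response(uri)
  end
end

refresh_views(URL, VIEWS)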
