I've got CouchDB mostly working on my Raspberry Pi, simply via `apt-get install couchdb` plus the permissions fix Jens posted about recently.
However, I can't get a particularly complex design document to finish its initial view generation. (See https://github.com/natevw/LocLog/tree/master/views for the source, especially https://github.com/natevw/LocLog/blob/master/views/by_utc/reduce.js.)

Originally I was getting explicit timeout errors, so after trying more conservative values without success I cranked os_process_timeout up to 9000000. That got the indexer a lot farther, but now it gets stuck with no indication of what's going wrong, other than the server suddenly dropping out and getting respawned:

[Sat, 01 Sep 2012 04:55:55 GMT] [info] [<0.15090.1>] checkpointing view update at seq 2272 for loctest _design/loclog
[Sat, 01 Sep 2012 05:00:01 GMT] [info] [<0.15090.1>] checkpointing view update at seq 2409 for loctest _design/loclog
[Sat, 01 Sep 2012 05:09:49 GMT] [info] [<0.15090.1>] checkpointing view update at seq 2517 for loctest _design/loclog
[Sat, 01 Sep 2012 05:14:46 GMT] [info] [<0.32.0>] Apache CouchDB has started on http://0.0.0.0:5984/
[Sat, 01 Sep 2012 05:19:50 GMT] [info] [<0.121.0>] 192.168.1.6 - - GET /_active_tasks 200
[Sat, 01 Sep 2012 05:19:55 GMT] [info] [<0.121.0>] 192.168.1.6 - - GET /_active_tasks 200
[Sat, 01 Sep 2012 05:20:00 GMT] [info] [<0.121.0>] 192.168.1.6 - - GET /_active_tasks 200
[Sat, 01 Sep 2012 05:20:05 GMT] [info] [<0.121.0>] 192.168.1.6 - - GET /_active_tasks 200
[Sat, 01 Sep 2012 05:20:10 GMT] [info] [<0.121.0>] 192.168.1.6 - - GET /_active_tasks 200
[Sat, 01 Sep 2012 05:20:15 GMT] [info] [<0.121.0>] 192.168.1.6 - - GET /_active_tasks 200
[Sat, 01 Sep 2012 05:20:55 GMT] [info] [<0.32.0>] Apache CouchDB has started on http://0.0.0.0:5984/

Any idea how to determine what could be causing this, and whether there's a remedy? My reduce function is rather float-heavy, and I suspect the package build may be using soft floats instead of hardware floating point (I'm not sure how to verify that). But regardless, the view made it this far, and seeing it simply fail without so much as a trace is a new one to me.
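For reference, this is the override I'm running with; on the Debian package the file should be /etc/couchdb/local.ini (the value is in milliseconds, so this is 2.5 hours):

```ini
[couchdb]
; default is 5000 (5 s); raised to keep the view server from being killed
os_process_timeout = 9000000
```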
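In case it helps, here's roughly how I've been trying to check the float ABI. This is just a sketch: the beam.smp path is a guess based on the Debian Erlang layout, and I'm inferring the ABI from the Tag_ABI_VFP_args build attribute that hard-float ARM binaries carry.

```shell
#!/bin/sh
# Sketch: guess whether the Erlang VM was built for the ARM hard-float ABI.
# beam.smp location is an assumption; adjust for your Erlang install.
BEAM=$(ls /usr/lib/erlang/erts-*/bin/beam.smp 2>/dev/null | head -n 1)
# Fall back to any ELF binary so the check still runs for demonstration.
[ -n "$BEAM" ] || BEAM=/bin/ls
if readelf -A "$BEAM" 2>/dev/null | grep -q Tag_ABI_VFP_args; then
  RESULT=hard-float
else
  RESULT=soft-float-or-unknown
fi
echo "$BEAM: $RESULT"
```

(`dpkg --print-architecture` reporting armel rather than armhf would be another hint that the whole port is soft-float.)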
I don't particularly suspect an out-of-memory condition: the whole database is under 100 MB (albeit Snappy-compressed), and that's spread across well over 5000 separate documents.

thanks,
-natevw
