MiguelSR opened a new issue #1033: Application start failure when running single node
URL: https://github.com/apache/couchdb/issues/1033
 
 
   Hello!
   
   I installed a 3-node CouchDB cluster on 3 AWS instances. I ran into a memory problem, so I tried to change some of the instance settings (specifically, to add more RAM), but after that I couldn't start my instances again. I have since gone back to the original configuration, and now I can't even run a single instance, because I get the following errors in my log:
   
   ```
   
   [error] 2017-11-30T14:23:29.219536Z [email protected] emulator -------- Error in process <0.289.0> on node '[email protected]' with exit value: {badarg,[{ets,member,[mem3_openers,<<14 bytes>>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,"src/mem3_shards.erl"},{line,486}]},{mem3_shards,load_shards_from_db,2,[{file,"src/mem3_shards.erl"},{line,389}]},{mem3_shards,load_shards_from_disk...
 
   
   [error] 2017-11-30T14:23:29.220107Z [email protected] <0.255.0> -------- Error opening view group `vistas` from database `shards/80000000-9fffffff/testclassifier.1511516580`:
{'EXIT',{{badmatch,{badarg,[{ets,member,[mem3_openers,<<"testclassifier">>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,"src/mem3_shards.erl"},{line,486}]},{mem3_shards,load_shards_from_db,2,[{file,"src/mem3_shards.erl"},{line,389}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,378}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,407}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,96}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,38}]},{couch_index_server,'-get_index/3-fun-0-',2,[{file,"src/couch_index_server.erl"},{line,68}]}]}},[{couch_mrview,get_info,2,[{file,"src/couch_mrview.erl"},{line,332}]},{couch_compaction_daemon,maybe_compact_view,3,[{file,"src/couch_compaction_daemon.erl"},{line,253}]},{couch_compaction_daemon,maybe_compact_views,3,[{file,"src/couch_compaction_daemon.erl"},{line,227}]},{couch_compaction_daemon,'-compact_loop/1-fun-0-',3,[{file,"src/couch_compaction_daemon.erl"},{line,141}]},{couch_server,'-all_databases/2-fun-0-',4,[{file,"src/couch_server.erl"},{line,278}]},{filelib,do_fold_files2,8,[{file,"filelib.erl"},{line,184}]},{filelib,do_fold_files2,8,[{file,"filelib.erl"},{line,194}]},{couch_server,all_databases,2,[{file,"src/couch_server.erl"},{line,267}]}]}}
   
   [...]
   
   [notice] 2017-11-30T14:23:29.256223Z [email protected] <0.347.0> -------- Failed to ensure auth ddoc _users/_design/_auth exists for reason: read_failure
   
   [...]
   
   Crash dump was written to: erl_crash.dump
   Kernel pid terminated (application_controller) ({application_start_failure,couch_peruser,{{shutdown,{failed_to_start_child,couch_peruser,{already_started,<0.214.0>}}},{couch_peruser_app,start,[norma
   
   
   ```
   
   The first message is seen once for each database.
   
   I'm running CouchDB via Docker, mounting my own /opt/couchdb/data and /opt/couchdb/etc from the host. There are 4 or 5 databases, all of which worked perfectly before this problem. I start the container roughly as shown below.
   
   The problem seems to be that some file can't be read, perhaps because of a permissions or locking issue, but I really have no idea.
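   
   In case it's relevant, this is the kind of check I can run to compare the ownership of the mounted directories with the user CouchDB runs as inside the container (the container name is a placeholder, and I'm assuming the image has a `couchdb` user):
   
   ```
   # on the host: who owns the mounted directories?
   ls -ln /opt/couchdb/data /opt/couchdb/etc
   
   # inside the container: uid/gid of the couchdb user, and whether it can list the data
   docker exec couchdb id couchdb
   docker exec couchdb ls -ln /opt/couchdb/data
   ```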
   
   Just to add some information, I created the cluster manually, not via 
Fauxton.
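   
   For reference, by "manually" I mean the documented approach of adding each node to the backend `_nodes` database on the node-local port rather than using the setup wizard, roughly like this (node names and credentials are placeholders):
   
   ```
   # run against one node's backend (node-local) port, once per additional node
   curl -X PUT 'http://admin:password@node1.example.com:5986/_nodes/couchdb@node2.example.com' -d '{}'
   curl -X PUT 'http://admin:password@node1.example.com:5986/_nodes/couchdb@node3.example.com' -d '{}'
   ```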
   
   I hope that someone can help me :).
   
   Thank you very much.
