sklassen closed pull request #37: New Instructions for installing snap including enabling interfaces
URL: https://github.com/apache/couchdb-pkg/pull/37
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/snap/BUILD.md b/snap/BUILD.md
new file mode 100644
index 0000000..694b0af
--- /dev/null
+++ b/snap/BUILD.md
@@ -0,0 +1,30 @@
+# Building snaps
+
+## Prerequisites
+
+Building the CouchDB snap requires Ubuntu 16.04. If you are on 18.04, LXD can provide a 16.04 container:
+
+1. `lxc launch ubuntu:16.04 couchdb-pkg`
+1. `lxc exec couchdb-pkg bash`
+1. `sudo apt update`
+1. `sudo apt install snapd snapcraft`
+
+1. `git clone https://github.com/couchdb/couchdb-pkg.git`
+1. `cd couchdb-pkg`
+
+## How to do it
+
+1. Edit `snap/snapcraft.yaml` to point to the correct tag (e.g. `2.2.0`)
+1. `snapcraft`
+
+## Installation
+
+You may need to pull the snap file from the LXD container to the host system.
+
+    $ lxc file pull couchdb-pkg/root/couchdb-pkg/couchdb_2.2.0_amd64.snap /tmp/couchdb_2.2.0_amd64.snap
+
+The self-crafted snap will need to be installed in devmode:
+
+    $ sudo snap install /tmp/couchdb_2.2.0_amd64.snap --devmode
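+
+To verify the service is up (a quick check, assuming CouchDB is listening on its default port 5984):
+
+    $ curl http://localhost:5984/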
+
+
diff --git a/snap/HOWTO.md b/snap/HOWTO.md
index 21124a5..62a9831 100644
--- a/snap/HOWTO.md
+++ b/snap/HOWTO.md
@@ -1,109 +1,107 @@
 # HOW TO install a cluster using snap
 
-# Create three machines
-
-In the instruction below, we are going to set up a three -- the minimum number needed to gain performance improvement -- Couch cluster database. In this potted example we will be using LXD.
-
-We launch a new container and install couchdb on one machine
-
-1. localhost> `lxc launch ubuntu:18.04 couchdb-c1`
-1. localhost> `lxc exec couchdb-c1 bash`
-1. couchdb-c1> `apt update`
-1. couchdb-c1> `snap install couchdb`
-1. couchdb-c1> `logout`
-
-Here we use LXD copy function to speed up the test
-```
-lxc copy couchdb-c1 couchdb-c2
-lxc copy couchdb-c1 couchdb-c3
-lxc copy couchdb-c1 cdb-backup
-lxc start couchdb-c2
-lxc start couchdb-c3
-lxc start cdb-backup
-```
-
-# Configure CouchDB (using the snap tool)
-
-We are going to need the IP addresses. You can find them here.
-```
-lxc list
-```
-
-Now let's use the snap configuration tool to set the configuration files.
-```
-lxc exec couchdb-c1 snap set couchdb name=couchdb@10.210.199.199 setcookie=monster admin=password bind-address=0.0.0.0
-lxc exec couchdb-c2 snap set couchdb name=couchdb@10.210.199.254 setcookie=monster admin=password bind-address=0.0.0.0
-lxc exec couchdb-c3 snap set couchdb name=couchdb@10.210.199.24 setcookie=monster admin=password bind-address=0.0.0.0
-```
-The backup machine we will leave as a single instance and no sharding. 
-```
-lxc exec cdb-backup snap set couchdb name=couchdb@127.0.0.1 setcookie=monster admin=password bind-address=0.0.0.0 n=1 q=1
-```
-
-The snap must be restarted for the new configurations to take effect.
-```
-lxc exec couchdb-c1 snap restart couchdb
-lxc exec couchdb-c2 snap restart couchdb
-lxc exec couchdb-c3 snap restart couchdb
-lxc exec cdb-backup snap restart couchdb
+## Create three nodes
+
+In the example below, we are going to set up a three-node CouchDB cluster. (Three is the minimum number needed to support clustering features.) We'll also set up a separate, single machine for making backups. We will be using LXD.
+
+We launch a (single) new container, install couchdb via snap from the store, and enable its interfaces.
+```bash
+  localhost> lxc launch ubuntu:18.04 couchdb-c1
+  localhost> lxc exec couchdb-c1 bash
+  couchdb-c1> apt update
+  couchdb-c1> snap install couchdb --edge
+  couchdb-c1> snap connect couchdb:mount-observe
+  couchdb-c1> snap connect couchdb:process-control
+  couchdb-c1> logout
+```
+Back on localhost, we can then use the LXD copy function to speed up installation:
+```bash
+  $ lxc copy couchdb-c1 couchdb-c2
+  $ lxc copy couchdb-c1 couchdb-c3
+  $ lxc copy couchdb-c1 cdb-backup
+  $ lxc start couchdb-c2
+  $ lxc start couchdb-c3
+  $ lxc start cdb-backup
+```
+## Configure CouchDB using the snap tool
+
+We are going to need the IP addresses:
+```bash
+  $ lxc list
+```
+Now, again from localhost, we will use the `lxc exec` command and the snap configuration tool to set the various configuration files.
+```bash
+  $ lxc exec couchdb-c1 snap set couchdb name=couchdb@10.210.199.199 setcookie=monster admin=Be1stDB bind-address=0.0.0.0
+  $ lxc exec couchdb-c2 snap set couchdb name=couchdb@10.210.199.254 setcookie=monster admin=Be1stDB bind-address=0.0.0.0
+  $ lxc exec couchdb-c3 snap set couchdb name=couchdb@10.210.199.24 setcookie=monster admin=Be1stDB bind-address=0.0.0.0
+```
+We will configure the backup machine as a single instance (n=1) with a single shard (q=1).
+```bash
+  $ lxc exec cdb-backup snap set couchdb name=couchdb@127.0.0.1 setcookie=monster admin=Be1stDB bind-address=0.0.0.0 n=1 q=1
+```
+Each snap must be restarted for the new configurations to take effect.
+```bash
+  $ lxc exec couchdb-c1 snap restart couchdb
+  $ lxc exec couchdb-c2 snap restart couchdb
+  $ lxc exec couchdb-c3 snap restart couchdb
+  $ lxc exec cdb-backup snap restart couchdb
 ```
 The configuration files are stored here.
-```
-lxc exec cdb-backup cat /var/snap/couchdb/current/etc/vm.args
-lxc exec cdb-backup cat /var/snap/couchdb/current/etc/local.d/*
+```bash
+  $ lxc exec cdb-backup cat /var/snap/couchdb/current/etc/vm.args
+  $ lxc exec cdb-backup cat /var/snap/couchdb/current/etc/local.d/*
 ```
 Any changes to couchdb from the http configuration tool are made here:
-```
-lxc exec cdb-backup cat /var/snap/couchdb/current/etc/local.d/local.ini
+```bash
+  $ lxc exec cdb-backup cat /var/snap/couchdb/current/etc/local.ini
 ```
 
-# Configure CouchDB Cluster (using the http interface)
+## Configure CouchDB Cluster (using the http interface)
 
-Now we set up the cluster via the http front-end. This only needs to be run once on the first machine. The last command syncs with the other nodes and creates the standard databases.
-```
-curl -X POST -H "Content-Type: application/json" http://admin:password@10.210.199.199:5984/_cluster_setup -d '{"action": "add_node", "host":"10.210.199.254", "port": "5984", "username": "admin", "password":"password"}'
-curl -X POST -H "Content-Type: application/json" http://admin:password@10.210.199.199:5984/_cluster_setup -d '{"action": "add_node", "host":"10.210.199.24", "port": "5984", "username": "admin", "password":"password"}'
-curl -X POST -H "Content-Type: application/json" http://admin:password@10.210.199.199:5984/_cluster_setup -d '{"action": "finish_cluster"}'
+Now we set up the cluster via the http front-end. This only needs to be run once on the first machine. The last command syncs with the other nodes and creates the standard databases.
+```bash
+  $ curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@10.210.199.199:5984/_cluster_setup -d '{"action": "add_node", "host":"10.210.199.254", "port": "5984", "username": "admin", "password":"Be1stDB"}'
+  $ curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@10.210.199.199:5984/_cluster_setup -d '{"action": "add_node", "host":"10.210.199.24", "port": "5984", "username": "admin", "password":"Be1stDB"}'
+  $ curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@10.210.199.199:5984/_cluster_setup -d '{"action": "finish_cluster"}'
 ```
 Now we have a functioning three node cluster. 
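+To confirm, the `_membership` endpoint lists the nodes that have joined the cluster (a quick check; any of the three nodes can answer):
+```bash
+  $ curl -X GET http://admin:Be1stDB@10.210.199.199:5984/_membership
+```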
 
-# An Example Database
+## An Example Database
 
 Let's create an example database ...
+```bash
+  $ curl -X PUT http://admin:Be1stDB@10.210.199.199:5984/example
+  $ curl -X PUT http://admin:Be1stDB@10.210.199.199:5984/example/aaa -d '{"test":1}' -H "Content-Type: application/json"
+  $ curl -X PUT http://admin:Be1stDB@10.210.199.199:5984/example/aab -d '{"test":2}' -H "Content-Type: application/json"
+  $ curl -X PUT http://admin:Be1stDB@10.210.199.199:5984/example/aac -d '{"test":3}' -H "Content-Type: application/json"
 ```
-curl -X PUT http://admin:password@10.210.199.199:5984/example
-curl -X PUT http://admin:password@10.210.199.199:5984/example/aaa -d '{"test":1}' -H "Content-Type: application/json"
-curl -X PUT http://admin:password@10.210.199.199:5984/example/aab -d '{"test":2}' -H "Content-Type: application/json"
-curl -X PUT http://admin:password@10.210.199.199:5984/example/aac -d '{"test":3}' -H "Content-Type: application/json"
-```
-... And see that it is sync'd across the three nodes.
-```
-curl -X GET http://admin:password@10.210.199.199:5984/example/_all_docs
-curl -X GET http://admin:password@10.210.199.254:5984/example/_all_docs
-curl -X GET http://admin:password@10.210.199.24:5984/example/_all_docs
+... and see that it is created on all three nodes.
+```bash
+  $ curl -X GET http://admin:Be1stDB@10.210.199.199:5984/example/_all_docs
+  $ curl -X GET http://admin:Be1stDB@10.210.199.254:5984/example/_all_docs
+  $ curl -X GET http://admin:Be1stDB@10.210.199.24:5984/example/_all_docs
 ```
-# Backing Up CouchDB
+## Backing Up CouchDB
 
-Our back up server is on 10.210.199.242. We will manually replicate this from one (anyone) of the nodes.
+Our backup server is on 10.210.199.242. We will manually replicate to it from any one of the nodes.
+```bash
+  $ curl -X POST http://admin:Be1stDB@10.210.199.242:5984/_replicate -d '{"source":"http://10.210.199.199:5984/example", "target":"example", "continuous":false, "create_target":true}' -H "Content-Type: application/json"
+  $ curl -X GET http://admin:Be1stDB@10.210.199.242:5984/example/_all_docs
 ```
-curl -X POST http://admin:password@10.210.199.242:5984/_replicate -d '{"source":"http://10.210.199.199:5984/example", "target":"example", "continuous":false,"create_target":true}' -H "Content-Type: application/json"
-curl -X GET http://admin:password@10.210.199.242:5984/example/_all_docs
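+For an ongoing backup, the same `_replicate` endpoint accepts `"continuous":true` (a sketch; the replication then keeps the backup in sync until it is cancelled):
+```bash
+  $ curl -X POST http://admin:Be1stDB@10.210.199.242:5984/_replicate -d '{"source":"http://10.210.199.199:5984/example", "target":"example", "continuous":true}' -H "Content-Type: application/json"
+```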
+The data store for the cluster's nodes, by contrast, is sharded:
+```bash
+  $ lxc exec couchdb-c1 ls /var/snap/couchdb/common/data/shards/
 ```
-The data store for the clusters nodes are sharded 
-```
-lxc exec couchdb-c1 ls /var/snap/couchdb/common/2.x/data/shards/
-```
-
-The backup database is a single file.
-```
-lxc exec cdb-backup ls /var/snap/couchdb/common/2.x/data/shards/00000000-ffffffff/
+The backup database is a single directory:
+```bash
+  $ lxc exec cdb-backup ls /var/snap/couchdb/common/data/shards/
 ```
 
-# Monitoring CouchDB 
-
-The logs, by default, are captured by journald
-```
-lxc exec couchdb-c1 bash
-journalctl -u snap.couchdb -f
-```
+## Monitoring CouchDB 
 
+The logs, by default, are captured by journald. First connect to the node in question:
+```bash
+  $ lxc exec couchdb-c1 bash
+```
+Then show the logs as usual; note that the couchdb unit name is prefixed with `snap`:
+```bash
+  $ journalctl -u snap.couchdb -f
+```
diff --git a/snap/README.md b/snap/README.md
index 65ce54e..9f32fb4 100644
--- a/snap/README.md
+++ b/snap/README.md
@@ -1,61 +1,84 @@
-# Building snaps
-
-## Prerequisites
-
-CouchDB requires Ubuntu 16.04. If building on 18.04, then LXD might be useful. 
-
-1. `lxc launch ubuntu:16.04 couchdb-pkg`
-1. `lxc exec couchdb-pkg bash`
-1. `sudo apt update`
-1. `sudo apt install snapd snapcraft`
-
-1. `git clone https://github.com/couchdb/couchdb-pkg.git`
-1. `cd couchdb-pkg`
-
-## How to do it
+# Snap Installation
 
-1. Edit `snap/snapcraft.yaml` to point to the correct tag (e.g. `2.2.0`)
-1. `snapcraft`
+## Downloading from the snap store
 
-# Snap Installation
+The snap can be installed from a file or directly from the snap store. It is, for the moment, listed in the edge channel.
 
-You may need to pull the LXD file to the host system.
+```
+    $ sudo snap install couchdb --edge
+```  
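+To confirm the installed channel and revision (a quick check; `snap list` accepts a snap name):
+```
+    $ snap list couchdb
+```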
+## Enable snap permissions
 
-    $ lxc file pull couchdb-pkg/root/couchdb-pkg/couchdb_2.2.0_amd64.snap /tmp/couchdb_2.2.0_amd64.snap
+The snap installation uses AppArmor to protect your system. CouchDB requests access to two
+interfaces: mount-observe, which is used by the disk compactor to know when to initiate a
+cleanup; and process-control, which is used by the indexer to set the priority of couchjs to
+'nice'. These two interfaces, while not required, are useful. If they are not enabled, CouchDB
+will still run, but you will need to run the compactor manually and couchjs may put a heavy
+load on the system when indexing.
 
-The self crafted snap will need to be installed in devmode
+To connect the interfaces, type:
+```
+    $ sudo snap connect couchdb:mount-observe
+    $ sudo snap connect couchdb:process-control
+```
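+You can verify the connections (a quick check; on older snapd versions the command is `snap interfaces couchdb`):
+```
+    $ snap connections couchdb
+```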
+## Snap configuration
 
-    $ sudo snap install /tmp/couchdb_2.2.0_amd64.snap --devmode 
+There are two levels of hierarchy within couchdb configuration. 
 
-# Snap Configuration
+The default layer is stored in /snap/couchdb/current/rel/couchdb/etc/, and the default.ini is
+read before the default.d directory. In the snap installation this is mounted read-only.
 
-There are two levels of erlang and couchdb configuration hierarchy. 
+The local layer is stored in /var/snap/couchdb/current/etc/ on the writable /var mount.
+Within this second layer, configurations are set in local.ini (a single file with co-mingled
+sections) or in the local.d directory (one file per section). Configuration management tools
+(like puppet, chef, ansible, salt) often use the former (local.ini). The "snap set" command
+works within the latter (local.d). Entries in local.d supersede those in local.ini.
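+For illustration (an assumed example, not output from a real system), `snap set couchdb q=2`
+is written by the configure hook to local.d/30-cluster.ini as:
+```
+    [cluster]
+    q = 2
+```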
 
-The default layer is stored in /snap/couchdb/current/rel/couchdb/etc/ and is read only.
-The user override layer, is stored in /var/snap/couchdb/current/etc/ and is writable.
-Within this second layer, configurations are set with the local.d directory (one file
-per section) or the local.ini (co-mingled). The "snap set" command works with the
-former (local.d) and couchdb http configuration overwrites the latter (local.ini).
-Entries in local.ini supersede those in the local.d directory.
+Changes made via the http configuration interface are written to local.d/90-override.ini.
 
 The name of the erlang process and the security cookie used are set in the vm.args file.
-This can be set through the snap native configuration. For example, when setting up
+This should be set through the snap native configuration. For example, when setting up
 a cluster over several machines, the convention is to set the erlang
-name to couchdb@your.ip.address. Both erlang and couchdb configuration changes can be made at the same time.
+name to couchdb@your.ip.address.
+
+Both erlang and couchdb configuration changes can be made at the same time.
 
+```
     $ sudo snap set couchdb name=couchdb@216.3.128.12 setcookie=cutter admin=Be1stDB bind-address=0.0.0.0
+```
 
 Snap set variables can not contain the underscore character, but any dashes are converted to underscores when
-writing to file. Wrap double quotes around any bracets and avoid spaces.
-
-    $ sudo snap set couchdb delayed-commits=true erlang="{couch_native_process,start_link,[]}"
+writing to file. Wrap double quotes around any brackets or spaces. 
 
+```
+   $ sudo snap set couchdb erlang="{couch_native_process,start_link,[]}"
+```
+Snap Native Configuration changes only come into effect after a restart:
-    
+
+```
     $ sudo snap restart couchdb
+```
+Snap Native Configuration has only been enabled for a few options essential to initial
+installation, or items that are not white-listed for configuration over HTTP. Other options
+that can be set via snap are: chttpd's "port"; the cluster options "n" and "q"; the log
+options "writer", "file", and "level"; and the native query servers options "query" and "erlang".
+
+Other options can be set via configuration over HTTP, as below.
+
+```
+    $ curl -X PUT http://admin:Be1stDB@216.3.128.12:5984/_node/_local/_config/couchdb/delayed-commits -d '"true"'
+```
+This has the advantage of not requiring an application restart.
+
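+The current value can be read back the same way (a quick check):
+```
+    $ curl -X GET http://admin:Be1stDB@216.3.128.12:5984/_node/_local/_config/couchdb/delayed-commits
+```
+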
+For anything not covered by snap set or the configuration over http, you can edit
+the /var/snap/couchdb/current/etc files by hand.
+
+## Example Cluster
+
+See the [HOWTO][1] file for an example of a three-node cluster and further notes.
+
+## Building a Private Snap
 
-# Example Cluster
+If you want to build your own snap file from source, see the [BUILD][2] file for instructions.
 
-See the HOWTO.md file to see an example of a three node cluster.
+[1]: HOWTO.md
+[2]: BUILD.md
 
diff --git a/snap/meta/hooks/configure b/snap/meta/hooks/configure
index 8c2b1aa..4b29008 100755
--- a/snap/meta/hooks/configure
+++ b/snap/meta/hooks/configure
@@ -20,10 +20,11 @@ _modify_vm_args() {
 }
 
 _modify_ini_file() {
-  section=$1
-  opt=`echo $2 | tr "-" "_"`
-  value="$3"
-  config_file=${LOCAL_DIR}/${section}.ini
+  filename=$1
+  section=$2
+  opt=`echo $3 | tr "-" "_"`
+  value="$4"
+  config_file=${LOCAL_DIR}/${filename}.ini
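+  # e.g. "_modify_ini_file 30-cluster cluster q 2" writes to ${LOCAL_DIR}/30-cluster.ini;
+  # dashes in the option name are mapped to underscores above.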
   if [ ! -e ${config_file} ]; then
     echo "[${section}]" > $config_file
   fi
@@ -56,123 +57,57 @@ done
 
 # Special Cases
 
-# local.d/admins.ini
+# local.d/10-admins.ini
 passwd=$(snapctl get admin)
 if [ ! -z "$passwd" ]; then
-   _modify_ini_file admins admin $passwd
-   chmod 600 $SNAP_DATA/etc/local.d/admins.ini
-   sleep 0.125
-fi
-
-# local.d/ssl.ini
-port=$(snapctl get ssl-port)
-if [ ! -z "$port" ]; then
-   _modify_ini_file ssl port $port
-   sleep 0.125
-fi
-
-# local.d/httpd.ini
-port=$(snapctl get httpd-port)
-if [ ! -z "$port" ]; then
-   _modify_ini_file httpd port $port
-   sleep 0.125
-fi
-
-# local.d/chttpd.ini
-port=$(snapctl get chttpd-port)
-if [ ! -z "$port" ]; then
-   _modify_ini_file chttpd port $port
+   _modify_ini_file 10-admins admins admin $passwd
+   chmod 600 $SNAP_DATA/etc/local.d/10-admins.ini
    sleep 0.125
 fi
 
 # Generic Cases
 
-# local.d/chttpd.ini
-CHTTPD_OPTIONS="port bind-address require-valid-user"
+# local.d/20-chttpd.ini
+CHTTPD_OPTIONS="port bind-address"
 for key in $CHTTPD_OPTIONS
 do
   val=$(snapctl get $key)
   if [ ! -z "$val" ]; then
-    _modify_ini_file chttpd $key $val
+    _modify_ini_file 20-chttpd chttpd $key $val
     sleep 0.125
   fi
 done
 
-# local.d/cluster.ini
+# local.d/30-cluster.ini
 CLUSTER_OPTIONS="n q"
 for key in $CLUSTER_OPTIONS
 do
   val=$(snapctl get $key)
   if [ ! -z "$val" ]; then
-    _modify_ini_file cluster $key $val
-    sleep 0.125
-  fi
-done
-
-# local.d/compaction_daemon.ini
-COMPACTION_DAEMON_OPTIONS="check-interval"
-for key in $COMPACTION_DAEMON_OPTIONS
-do
-  val=$(snapctl get $key)
-  if [ ! -z "$val" ]; then
-    _modify_ini_file compaction_daemon $key $val
-    sleep 0.125
-  fi
-done
-
-# local.d/couchdb.ini
-COUCHDB_OPTIONS="database-dir view-index-dir delayed-commits"
-for key in $COUCHDB_OPTIONS
-do
-  val=$(snapctl get $key)
-  if [ ! -z "$val" ]; then
-    _modify_ini_file couchdb $key $val
+    _modify_ini_file 30-cluster cluster $key $val
     sleep 0.125
   fi
 done
 
-# local.d/log.ini
+# local.d/40-log.ini
 LOG_OPTIONS="writer file level"
 for key in $LOG_OPTIONS
 do
   val=$(snapctl get $key)
   if [ ! -z "$val" ]; then
-    _modify_ini_file log $key $val
+    _modify_ini_file 40-log log $key $val
     sleep 0.125
   fi
 done
 
-# local.d/native_query_servers.ini
+# local.d/50-native_query_servers.ini
 NATIVE_QUERY_SERVERS_OPTIONS="query erlang"
 for key in $NATIVE_QUERY_SERVERS_OPTIONS
 do
   val=$(snapctl get $key)
   if [ ! -z "$val" ]; then
-    _modify_ini_file native_query_servers $key $val
+    _modify_ini_file 50-native_query_servers native_query_servers $key $val
     sleep 0.125
   fi
 done
 
-# local.d/couch_peruser.ini
-COUCH_PERUSER_OPTIONS="database-prefix delete-dbs enable"
-for key in $COUCH_PERUSER_OPTIONS
-do
-  val=$(snapctl get $key)
-  if [ ! -z "$val" ]; then
-    _modify_ini_file couch_peruser $key $val
-    sleep 0.125
-  fi
-done
-
-# local.d/uuids.ini
-UUIDS_OPTIONS="algorithm max-count"
-for key in $UUIDS_OPTIONS
-do
-  val=$(snapctl get $key)
-  if [ ! -z "$val" ]; then
-    _modify_ini_file uuids $key $val
-    sleep 0.125
-  fi
-done
-
-
diff --git a/snap/meta/hooks/install b/snap/meta/hooks/install
index eb7a541..28a5f64 100755
--- a/snap/meta/hooks/install
+++ b/snap/meta/hooks/install
@@ -2,8 +2,14 @@
 
 mkdir -p ${SNAP_DATA}/etc/local.d 
 
-cp ${SNAP}/rel/couchdb/etc/vm.args ${SNAP_DATA}/etc/vm.args
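+# Do not clobber an existing vm.args or local.ini; only seed defaults when absent.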
+if [ ! -f ${SNAP_DATA}/etc/vm.args ]; then
+   cp ${SNAP}/rel/couchdb/etc/vm.args ${SNAP_DATA}/etc/vm.args
+fi
 
-cp ${SNAP}/rel/couchdb/etc/local.d/*.ini ${SNAP_DATA}/etc/local.d
+if [ ! -f ${SNAP_DATA}/etc/local.ini ]; then
+   cp ${SNAP}/rel/couchdb/etc/local.ini ${SNAP_DATA}/etc/local.ini
+fi
+
+touch ${SNAP_DATA}/etc/local.d/90-override.ini
 
 
diff --git a/snap/snap_run b/snap/snap_run
index 5fa783a..e608086 100755
--- a/snap/snap_run
+++ b/snap/snap_run
@@ -16,7 +16,7 @@
 
 export HOME=$SNAP_DATA
 export COUCHDB_ARGS_FILE=${SNAP_DATA}/etc/vm.args
-export ERL_FLAGS="-couch_ini ${SNAP}/rel/couchdb/etc/default.ini ${SNAP_DATA}/etc/local.d ${SNAP_DATA}/etc/local.ini"
+export ERL_FLAGS="-couch_ini ${SNAP}/rel/couchdb/etc/default.ini ${SNAP}/rel/couchdb/etc/default.d ${SNAP_DATA}/etc/local.ini ${SNAP_DATA}/etc/local.d"
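+# -couch_ini reads these files in order; entries in later files override earlier
+# ones, so local.d (written by "snap set") takes precedence over local.ini.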
 
 mkdir -p ${SNAP_DATA}/etc 
 
@@ -26,7 +26,6 @@ fi
 
 if [ ! -d ${SNAP_DATA}/etc/local.d ]; then
     mkdir ${SNAP_DATA}/etc/local.d
-    cp ${SNAP}/rel/couchdb/etc/local.d/*.ini ${SNAP_DATA}/etc/local.d
 fi
 
 if [ ! -e ${SNAP_DATA}/etc/local.ini ]; then
diff --git a/snap/snapcraft.yaml b/snap/snapcraft.yaml
index 8dd7dea..1eadf03 100644
--- a/snap/snapcraft.yaml
+++ b/snap/snapcraft.yaml
@@ -58,7 +58,7 @@ parts:
         plugin: dump
         source: ./snap/
         organize:
-            couchdb.ini: rel/couchdb/etc/local.d/couchdb.ini
+            couchdb.ini: rel/couchdb/etc/default.d/couchdb.ini
             snap_run: rel/couchdb/bin/snap_run
 
     packages:


 
