This is an automated email from the ASF dual-hosted git repository.

willholley pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/couchdb-helm.git
commit e366671b0e7825e87761ee8dce9f37aaaedf76fa
Author: Will Holley <[email protected]>
AuthorDate: Thu Oct 17 16:08:55 2019 +0100

    Add e2e testing
    
    Adds integration testing using Kind (Kubernetes in Docker). The tests
    stand up a local Kubernetes cluster and install the current chart
    using Helm 2.x.
---
 Makefile                         |  20 +++--
 README.md                        | 168 +++++----------------------------
 test/ct.yaml                     |   1 +
 test/e2e-kind.sh                 | 100 +++++++++++++++++++++++
 test/kind-config.yaml            |   0
 test/local-path-provisioner.yaml | 108 +++++++++++++++++++++++++
 6 files changed, 245 insertions(+), 152 deletions(-)

diff --git a/Makefile b/Makefile
index 1c80ce0..6963b96 100644
--- a/Makefile
+++ b/Makefile
@@ -12,16 +12,24 @@
 SHELL=/bin/bash
-.PHONY: test
-test:
+.PHONY: lint
+lint:
 	@helm lint couchdb
-package: test
+.PHONY: package
+package: lint
 	@helm package couchdb
-.PHONY: package
-publish: test
+.PHONY: publish
+publish:
 	@git checkout gh-pages
+	@git checkout -b gh-pages-update
 	@helm repo index docs --url https://apache.github.io/couchdb-helm
 	@git add -i
-	@echo "To complete the publish step, commit and push the chart tgz and updated index to gh-pages"
+	@git commit
+	@echo "To complete the publish step, push the branch to your GitHub remote and create a PR against gh-pages"
+
+# Run end to end tests using KinD
+.PHONY: test
+test:
+	./test/e2e-kind.sh
diff --git a/README.md b/README.md
index a0bfc15..b17607b 100644
--- a/README.md
+++ b/README.md
@@ -1,167 +1,43 @@
-# CouchDB
+# CouchDB Helm Charts
-Apache CouchDB is a database featuring seamless multi-master sync, that scales
-from big data to mobile, with an intuitive HTTP/JSON API and designed for
-reliability.
+This repository contains assets related to the CouchDB Helm chart.
-This chart deploys a CouchDB cluster as a StatefulSet. It creates a ClusterIP
-Service in front of the Deployment for load balancing by default, but can also
-be configured to deploy other Service types or an Ingress Controller. The
-default persistence mechanism is simply the ephemeral local filesystem, but
-production deployments should set `persistentVolume.enabled` to `true` to attach
-storage volumes to each Pod in the Deployment.
+## Layout
-## TL;DR
+ * `couchdb`: contains the unbundled Helm chart
+ * `test`: contains scripts to test the chart locally using [Kind][5]
-```bash
-$ helm repo add couchdb https://apache.github.io/couchdb-helm
-$ helm install couchdb/couchdb --set allowAdminParty=true
-```
+## Testing
-## Prerequisites
+`make test` will run an integration test using [Kind][5]. This stands up a Kubernetes cluster locally and ensures the chart will
+deploy using the default options and Helm.
-- Kubernetes 1.8+ with Beta APIs enabled
+## Releasing
-## Installing the Chart
+The Helm chart is published to a Helm repository hosted on GitHub Pages. This is maintained in the `gh-pages` branch of this repository.
-To install the chart with the release name `my-release`:
+To publish a new release, perform the following steps:
-Add the CouchDB Helm repository:
+ 1. Create a Helm bundle (*.tgz) for the current couchdb chart
+ 2. Switch to the `gh-pages` branch
+ 3. Run `helm repo index docs --url https://apache.github.io/couchdb-helm` to generate the Helm repository index
+ 4. `git add` the tgz bundle and the `index.yaml` files. Do not delete the old chart bundles!
+ 5. Commit the changes and create a PR to `gh-pages`.
-```bash
-$ helm repo add couchdb https://apache.github.io/couchdb-helm
-```
+`make publish` automates these steps for you.
-
-```bash
-$ helm install --name my-release couchdb/couchdb
-```
-
-This will create a Secret containing the admin credentials for the cluster.
-Those credentials can be retrieved as follows:
-
-```bash
-$ kubectl get secret my-release-couchdb -o go-template='{{ .data.adminPassword }}' | base64 --decode
-```
-
-If you prefer to configure the admin credentials directly you can create a
-Secret containing `adminUsername`, `adminPassword` and `cookieAuthSecret` keys:
-
-```bash
-$ kubectl create secret generic my-release-couchdb --from-literal=adminUsername=foo --from-literal=adminPassword=bar --from-literal=cookieAuthSecret=baz
-```
-
-and then install the chart while overriding the `createAdminSecret` setting:
-
-```bash
-$ helm install --name my-release --set createAdminSecret=false couchdb/couchdb
-```
-
-This Helm chart deploys CouchDB on the Kubernetes cluster in a default
-configuration. The [configuration](#configuration) section lists
-the parameters that can be configured during installation.
-
-> **Tip**: List all releases using `helm list`
-
-## Uninstalling the Chart
-
-To uninstall/delete the `my-release` Deployment:
-
-```bash
-$ helm delete my-release
-```
-
-The command removes all the Kubernetes components associated with the chart and
-deletes the release.
-
-## Upgrading an existing Release to a new major version
-
-A major chart version change (like v0.2.3 -> v1.0.0) indicates that there is an
-incompatible breaking change needing manual actions.
-
-## Migrating from stable/couchdb
-
-This chart replaces the `stable/couchdb` chart previously hosted by Helm and continues the
-version semantics. You can upgrade directly from `stable/couchdb` to this chart using:
-
-```bash
-$ helm repo add couchdb https://apache.github.io/couchdb-helm
-$ helm upgrade my-release couchdb/couchdb
-```
-
-## Configuration
-
-The following table lists the most commonly configured parameters of the
-CouchDB chart and their default values:
-
-| Parameter | Description | Default |
-|---------------------------------|-------------------------------------------------------|----------------------------------------|
-| `clusterSize` | The initial number of nodes in the CouchDB cluster | 3 |
-| `couchdbConfig` | Map allowing override elements of server .ini config | chttpd.bind_address=any |
-| `allowAdminParty` | If enabled, start cluster without admin account | false (requires creating a Secret) |
-| `createAdminSecret` | If enabled, create an admin account and cookie secret | true |
-| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` |
-| `erlangFlags` | Map of flags supplied to the underlying Erlang VM | name: couchdb, setcookie: monster
-| `persistentVolume.enabled` | Boolean determining whether to attach a PV to each node | false
-| `persistentVolume.size` | If enabled, the size of the persistent volume to attach | 10Gi
-| `enableSearch` | Adds a sidecar for Lucene-powered text search | false |
-
-A variety of other parameters are also configurable. See the comments in the
-`values.yaml` file for further details:
-
-| Parameter | Default |
-|---------------------------------|----------------------------------------|
-| `adminUsername` | admin |
-| `adminPassword` | auto-generated |
-| `cookieAuthSecret` | auto-generated |
-| `image.repository` | couchdb |
-| `image.tag` | 2.3.1 |
-| `image.pullPolicy` | IfNotPresent |
-| `searchImage.repository` | kocolosk/couchdb-search |
-| `searchImage.tag` | 0.1.0 |
-| `searchImage.pullPolicy` | IfNotPresent |
-| `initImage.repository` | busybox |
-| `initImage.tag` | latest |
-| `initImage.pullPolicy` | Always |
-| `ingress.enabled` | false |
-| `ingress.hosts` | chart-example.local |
-| `ingress.annotations` | |
-| `ingress.tls` | |
-| `persistentVolume.accessModes` | ReadWriteOnce |
-| `persistentVolume.storageClass` | Default for the Kube cluster |
-| `podManagementPolicy` | Parallel |
-| `affinity` | |
-| `resources` | |
-| `service.annotations` | |
-| `service.enabled` | true |
-| `service.type` | ClusterIP |
-| `service.externalPort` | 5984 |
-| `dns.clusterDomainSuffix` | cluster.local |
-
-
-## Feedback, Issues, Contributing
+## Feedback / Issues / Contributing
 General feedback is welcome at our [user][1] or [developer][2] mailing
 lists.
 Apache CouchDB has a [CONTRIBUTING][3] file with details on how to get
 started with issue reporting or contributing to the upkeep of this
 project. In short,
-use GitHub Issues, do not report anything on Docker's website.
-
-## Non-Apache CouchDB Development Team Contributors
+use GitHub Issues, do not report anything to the Helm team.
-- [@natarajaya](https://github.com/natarajaya)
-- [@satchpx](https://github.com/satchpx)
-- [@spanato](https://github.com/spanato)
-- [@jpds](https://github.com/jpds)
-- [@sebastien-prudhomme](https://github.com/sebastien-prudhomme)
-- [@stepanstipl](https://github.com/sebastien-stepanstipl)
-- [@amatas](https://github.com/amatas)
-- [@Chimney42](https://github.com/Chimney42)
-- [@mattjmcnaughton](https://github.com/mattjmcnaughton)
-- [@mainephd](https://github.com/mainephd)
-- [@AdamDang](https://github.com/AdamDang)
-- [@mrtyler](https://github.com/mrtyler)
+The chart follows the technical guidelines / best practices [maintained][4] by the Helm team.
 [1]: http://mail-archives.apache.org/mod_mbox/couchdb-user/
 [2]: http://mail-archives.apache.org/mod_mbox/couchdb-dev/
 [3]: https://github.com/apache/couchdb/blob/master/CONTRIBUTING.md
+[4]: https://github.com/helm/charts/blob/master/REVIEW_GUIDELINES.md
+[5]: https://github.com/kubernetes-sigs/kind
diff --git a/test/ct.yaml b/test/ct.yaml
new file mode 100644
index 0000000..d40aa57
--- /dev/null
+++ b/test/ct.yaml
@@ -0,0 +1 @@
+helm-extra-args: --timeout 800
diff --git a/test/e2e-kind.sh b/test/e2e-kind.sh
new file mode 100755
index 0000000..59c1dbe
--- /dev/null
+++ b/test/e2e-kind.sh
@@ -0,0 +1,100 @@
+#!/usr/bin/env bash
+
+set -o errexit
+set -o nounset
+set -o pipefail
+
+readonly CT_VERSION=v2.3.3
+readonly KIND_VERSION=v0.5.1
+readonly CLUSTER_NAME=chart-testing
+readonly K8S_VERSION=v1.14.3
+
+run_ct_container() {
+    echo 'Running ct container...'
+    docker run --rm --interactive --detach --network host --name ct \
+        --volume "$(pwd)/test/ct.yaml:/etc/ct/ct.yaml" \
+        --volume "$(pwd):/workdir" \
+        --workdir /workdir \
+        "quay.io/helmpack/chart-testing:$CT_VERSION" \
+        cat
+    echo
+}
+
+cleanup() {
+    echo 'Removing ct container...'
+    docker kill ct > /dev/null 2>&1
+
+    kind delete cluster --name "$CLUSTER_NAME" || true
+
+    echo 'Done!'
+}
+
+docker_exec() {
+    docker exec --interactive ct "$@"
+}
+
+create_kind_cluster() {
+    if ! [ -x "$(command -v kind)" ]; then
+        echo 'Installing kind...'
+
+        curl -sSLo kind "https://github.com/kubernetes-sigs/kind/releases/download/$KIND_VERSION/kind-linux-amd64"
+        chmod +x kind
+        sudo mv kind /usr/local/bin/kind
+    fi
+
+    kind delete cluster --name "$CLUSTER_NAME" || true
+    kind create cluster --name "$CLUSTER_NAME" --config test/kind-config.yaml --image "kindest/node:$K8S_VERSION" --wait 60s
+
+    docker_exec mkdir -p /root/.kube
+
+    echo 'Copying kubeconfig to container...'
+    local kubeconfig
+    kubeconfig="$(kind get kubeconfig-path --name "$CLUSTER_NAME")"
+    docker cp "$kubeconfig" ct:/root/.kube/config
+
+    docker_exec kubectl cluster-info
+    echo
+
+    docker_exec kubectl get nodes
+    echo
+
+    echo 'Cluster ready!'
+    echo
+}
+
+install_tiller() {
+    echo 'Installing Tiller...'
+    docker_exec kubectl --namespace kube-system create sa tiller
+    docker_exec kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
+    docker_exec helm init --service-account tiller --upgrade --wait
+    echo
+}
+
+install_local-path-provisioner() {
+    # kind doesn't support dynamic PVC provisioning yet; this is one way to get it working
+    # https://github.com/rancher/local-path-provisioner
+
+    # Remove default storage class. It will be recreated by local-path-provisioner
+    docker_exec kubectl delete storageclass standard
+
+    echo 'Installing local-path-provisioner...'
+    docker_exec kubectl apply -f test/local-path-provisioner.yaml
+    echo
+}
+
+install_charts() {
+    docker_exec ct lint-and-install --chart-repos couchdb=https://apache.github.io/couchdb-helm --chart-dirs .
+    echo
+}
+
+main() {
+    run_ct_container
+    trap cleanup EXIT
+
+    create_kind_cluster
+    install_local-path-provisioner
+    install_tiller
+    install_charts
+}
+
+main
diff --git a/test/kind-config.yaml b/test/kind-config.yaml
new file mode 100644
index 0000000..e69de29
diff --git a/test/local-path-provisioner.yaml b/test/local-path-provisioner.yaml
new file mode 100644
index 0000000..3eda3a1
--- /dev/null
+++ b/test/local-path-provisioner.yaml
@@ -0,0 +1,108 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: local-path-storage
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: local-path-provisioner-service-account
+  namespace: local-path-storage
+---
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRole
+metadata:
+  name: local-path-provisioner-role
+  namespace: local-path-storage
+rules:
+- apiGroups: [""]
+  resources: ["nodes", "persistentvolumeclaims"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: [""]
+  resources: ["endpoints", "persistentvolumes", "pods"]
+  verbs: ["*"]
+- apiGroups: [""]
+  resources: ["events"]
+  verbs: ["create", "patch"]
+- apiGroups: ["storage.k8s.io"]
+  resources: ["storageclasses"]
+  verbs: ["get", "list", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRoleBinding
+metadata:
+  name: local-path-provisioner-bind
+  namespace: local-path-storage
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: local-path-provisioner-role
+subjects:
+- kind: ServiceAccount
+  name: local-path-provisioner-service-account
+  namespace: local-path-storage
+---
+apiVersion: apps/v1beta2
+kind: Deployment
+metadata:
+  name: local-path-provisioner
+  namespace: local-path-storage
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: local-path-provisioner
+  template:
+    metadata:
+      labels:
+        app: local-path-provisioner
+    spec:
+      serviceAccountName: local-path-provisioner-service-account
+      containers:
+      - name: local-path-provisioner
+        image: rancher/local-path-provisioner:v0.0.11
+        imagePullPolicy: Always
+        command:
+        - local-path-provisioner
+        - --debug
+        - start
+        - --config
+        - /etc/config/config.json
+        volumeMounts:
+        - name: config-volume
+          mountPath: /etc/config/
+        env:
+        - name: POD_NAMESPACE
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.namespace
+      volumes:
+      - name: config-volume
+        configMap:
+          name: local-path-config
+---
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: local-path
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: rancher.io/local-path
+volumeBindingMode: WaitForFirstConsumer
+reclaimPolicy: Delete
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: local-path-config
+  namespace: local-path-storage
+data:
+  config.json: |-
+    {
+      "nodePathMap":[
+        {
+          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
+          "paths":["/opt/local-path-provisioner"]
+        }
+      ]
+    }
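The teardown in test/e2e-kind.sh hangs on a bash `trap cleanup EXIT`, so the ct container and kind cluster are removed even when a test step fails partway through. A minimal self-contained sketch of that pattern (the echo lines are hypothetical stand-ins for the real `docker kill` and `kind delete cluster` calls):

```shell
#!/usr/bin/env bash
# Sketch of the trap-based cleanup pattern used by test/e2e-kind.sh.
set -o errexit
set -o nounset
set -o pipefail

cleanup() {
  # Stand-in for `docker kill ct` and `kind delete cluster`.
  echo 'Removing test resources...'
}

main() {
  # The EXIT trap fires on any exit path, including failures under errexit.
  trap cleanup EXIT
  echo 'Running tests...'
}

main
```

Because the trap fires on every exit path under `set -o errexit`, a failing `ct lint-and-install` still tears the cluster down.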
