This is an automated email from the ASF dual-hosted git repository.

awong pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/kudu.git
commit 2d62c427bf0e4d62df2f0cb355ebfc03705070a1
Author:     Andrew Wong <[email protected]>
AuthorDate: Thu Jul 29 18:43:57 2021 -0700

    [helm] fix usage of multiple directories

    This patch updates the Helm implementation to use the new FS_WAL_DIR
    and FS_DATA_DIRS environment variables. This patch also updates the
    README with the steps I used to test the changes.

    I verified that I could update the number of directories in
    kudu/values.yaml, confirming that the servers used multiple
    directories. Below is a snippet of the logs from one tserver with
    multiple directories (5, set via kudu/values.yaml). Previously, even
    with multiple storage devices set, each server would use a single
    directory, /var/lib/kudu, for storage.

    $ kubectl logs kudu-tserver-1
    I0730 00:36:07.404302 1 tablet_server_runner.cc:78] Tablet server non-default flags:
    --use_hybrid_clock=false
    --fs_data_dirs=/mnt/disk1,/mnt/disk2,/mnt/disk3,/mnt/disk4
    --fs_wal_dir=/mnt/disk0
    --webserver_doc_root=/opt/kudu/www
    --tserver_master_addrs=kudu-master-0.kudu-masters.default.svc.cluster.local,kudu-master-1.kudu-masters.default.svc.cluster.local,kudu-master-2.kudu-masters.default.svc.cluster.local
    --heap_profile_path=/tmp/kudu.1
    --stderrthreshold=0

    Tablet server version:
    kudu 1.16.0-SNAPSHOT
    revision 2ceec7749
    build type RELEASE
    ...
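As an illustrative aside (not part of the patch): the mapping from the storage count in kudu/values.yaml to the WAL and data directory flags in the log snippet above can be sketched in Python. The function name is hypothetical, and the single-volume fallback is an assumption based on the note added to values.yaml; what is grounded in the patch is that /mnt/disk0 holds WALs and /mnt/disk1 onward hold data.

```python
# Illustrative sketch (not part of the patch): how a storage count of N maps
# to the WAL/data directory flags seen in the tserver log above. The first
# volume holds WALs; the remaining volumes hold data.

def kudu_dir_flags(count: int) -> dict:
    """Return fs_wal_dir/fs_data_dirs values for `count` volumes (hypothetical helper)."""
    wal_dir = "/mnt/disk0"
    # Assumption from the values.yaml note: with a single volume, data
    # shares the WAL volume rather than getting its own directory.
    data_dirs = [f"/mnt/disk{i}" for i in range(1, count)] or [wal_dir]
    return {
        "--fs_wal_dir": wal_dir,
        "--fs_data_dirs": ",".join(data_dirs),
    }

# With count=5, as in the log snippet above:
flags = kudu_dir_flags(5)
print(flags["--fs_wal_dir"])    # /mnt/disk0
print(flags["--fs_data_dirs"])  # /mnt/disk1,/mnt/disk2,/mnt/disk3,/mnt/disk4
```

Keeping WALs on a dedicated volume is a common Kudu deployment practice, since WAL writes are latency-sensitive and benefit from not contending with data-directory I/O.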
    A rendered README can be found here:
    https://github.com/andrwng/kudu/blob/helm_multidir/kubernetes/helm/README.adoc

    Change-Id: I583dff69c809e0851052c98267740a3e70c60efa
    Reviewed-on: http://gerrit.cloudera.org:8080/17739
    Tested-by: Kudu Jenkins
    Reviewed-by: Bankim Bhavsar <[email protected]>
---
 kubernetes/helm/README.adoc                      | 94 +++++++++++++++++++-----
 kubernetes/helm/kudu/templates/_helmutils.tpl    | 26 +++++--
 kubernetes/helm/kudu/templates/kudu-service.yaml |  8 ++
 kubernetes/helm/kudu/values.yaml                 |  6 ++
 4 files changed, 110 insertions(+), 24 deletions(-)

diff --git a/kubernetes/helm/README.adoc b/kubernetes/helm/README.adoc
index 0bc9019..3d7acdc 100644
--- a/kubernetes/helm/README.adoc
+++ b/kubernetes/helm/README.adoc
@@ -21,10 +21,13 @@ NOTE: All of this work is experimental and subject to change or removal.
 
 == Getting Started
 
-Helm — The package manager for Kubernetes. Helps to define, install, and upgrade Kubernetes applications
+Helm — The package manager for Kubernetes. Helps to define, install, and
+upgrade Kubernetes applications.
 
 NOTE: Read more about Helm here https://helm.sh/docs/using_helm/#quickstart
 
+The below instructions rely on having Kudu Docker images built.
+
 ==== System Requirements
 
 kubectl
@@ -32,31 +35,86 @@ NOTE: Read more about Helm here https://helm.sh/docs/using_helm/#quickstart
 docker
 helm
 
-==== Build Kudu Docker Image
-
- ../../docker/docker-build.py
+=== Using Helm v3
+
+. Deploy a Kubernetes cluster with a cluster manager choice. For the sake of
+  local development, `minikube` is a fine choice:
++
+----
+$ minikube start
+----
+
+. If you have made changes to your Docker images, or would otherwise like to
+  use local images rather than those found on DockerHub, ensure your Kubernetes
+  cluster can access your desired Docker images.
++
+----
+$ minikube cache add apache/kudu:latest
+----
+
+. Install the `apache-kudu` Helm Chart. Optionally supply an edited
+  `values.yaml` file.
+  This deploys a Kudu cluster using the `minikube` container.
++
+----
+$ helm install -f kudu/values.yaml apache-kudu ./kudu
+----
+
+. Verify the cluster is running and view its logs.
++
+----
+$ kubectl get pods
+NAME             READY   STATUS    RESTARTS   AGE
+kudu-master-0    1/1     Running   0          108s
+kudu-master-1    1/1     Running   0          108s
+kudu-master-2    1/1     Running   0          108s
+kudu-tserver-0   1/1     Running   0          108s
+kudu-tserver-1   1/1     Running   0          108s
+kudu-tserver-2   1/1     Running   0          108s
+
+$ kubectl logs kudu-tserver-1
+I0730 00:36:07.404302 1 tablet_server_runner.cc:78] Tablet server non-default flags:
+--use_hybrid_clock=false
+--fs_data_dirs=/mnt/disk1,/mnt/disk2,/mnt/disk3,/mnt/disk4
+--fs_wal_dir=/mnt/disk0
+--webserver_doc_root=/opt/kudu/www
+--tserver_master_addrs=kudu-master-0.kudu-masters.default.svc.cluster.local,kudu-master-1.kudu-masters.default.svc.cluster.local,kudu-master-2.kudu-masters.default.svc.cluster.local
+--heap_profile_path=/tmp/kudu.1
+--stderrthreshold=0
+
+Tablet server version:
+...
+----
+
+. To stop Kudu, uninstall the Helm Chart.
++
+----
+$ helm uninstall apache-kudu
+----
+
+=== Using Helm v2
 
 ==== Creating Namespace
 
- kubectl create -f ../namespace.yaml
+ $ kubectl create -f ../namespace.yaml
 
 ==== Creating ServiceAccount And Role Binding (RBAC)
 
- kubectl create -f kudu-rbac.yaml
+ $ kubectl create -f kudu-rbac.yaml
 
 ==== Initializing Helm Tiller
 
- helm init --service-account kudu-helm --tiller-namespace apache-kudu --upgrade --wait
+ $ helm init --service-account kudu-helm --tiller-namespace apache-kudu --upgrade --wait
 
 Check if tiller is initialized and you should not see any authorization errors.
- helm ls --namespace apache-kudu --tiller-namespace apache-kudu
+ $ helm ls --namespace apache-kudu --tiller-namespace apache-kudu
 
 ==== Helm Launch Kudu cluster
 
- helm install kudu --namespace apache-kudu --name apache-kudu --tiller-namespace apache-kudu --wait
+ $ helm install kudu --namespace apache-kudu --name apache-kudu --tiller-namespace apache-kudu --wait
 
- helm install kudu -f kudu-expose-all.yaml --namespace apache-kudu --name apache-kudu --tiller-namespace apache-kudu --wait
+ $ helm install kudu -f kudu-expose-all.yaml --namespace apache-kudu --name apache-kudu --tiller-namespace apache-kudu --wait
 
 You should see below output on stdout
 
@@ -94,29 +152,29 @@ kudu-tserver-pdb N/A 1 1 12s
 
 ==== Port Forward The Kudu Master UI
 
- kubectl port-forward kudu-master-0 8051 -n apache-kudu
+ $ kubectl port-forward kudu-master-0 8051 -n apache-kudu
 
 OR
 
- minikube service kudu-master-service --url -n apache-kudu
+ $ minikube service kudu-master-service --url -n apache-kudu
 
 ==== Destroy The Kudu Cluster
 
- helm del --purge apache-kudu --tiller-namespace apache-kudu
+ $ helm del --purge apache-kudu --tiller-namespace apache-kudu
 
 ==== Display Kudu Master Logs:
 
- kubectl logs kudu-master-0 --namespace apache-kudu
+ $ kubectl logs kudu-master-0 --namespace apache-kudu
 
 === Testing Helm Charts
 
 # helm-template : it will render chart templates locally and display the output.
- helm template kudu
+ $ helm template kudu
 
 # To render just one template in a chart
- helm template kudu -x templates/kudu-service.yaml
+ $ helm template kudu -x templates/kudu-service.yaml
 
 # helm lint: examines a chart for possible issues, useful to validate chart dependencies.
- helm lint kudu --namespace apache-kudu --tiller-namespace apache-kudu
+ $ helm lint kudu --namespace apache-kudu --tiller-namespace apache-kudu
 
 # The argument this command takes is the name of a deployed release.
 # The tests to be run are defined in the chart that was installed.
- helm test apache-kudu --tiller-namespace apache-kudu
\ No newline at end of file
+ $ helm test apache-kudu --tiller-namespace apache-kudu
diff --git a/kubernetes/helm/kudu/templates/_helmutils.tpl b/kubernetes/helm/kudu/templates/_helmutils.tpl
index 7ec2665..7296c06 100644
--- a/kubernetes/helm/kudu/templates/_helmutils.tpl
+++ b/kubernetes/helm/kudu/templates/_helmutils.tpl
@@ -51,16 +51,12 @@ Create chart name and version as used by the chart label.
 {{- end -}}
 
 {{/*
-  Generate Kudu Masters String
+Generate Kudu Masters String
 */}}
 {{- define "kudu.gen_kudu_masters" -}}
 {{- $master_replicas := .Values.replicas.master | int -}}
 {{- $domain_name := .Values.domainName -}}
-  {{- range .Values.Services }}
-  {{- if eq .name "kudu-masters" }}
-  {{range $index := until $master_replicas }}{{if ne $index 0}},{{end}}kudu-master-{{ $index }}.kudu-masters.$(NAMESPACE).svc.{{ $domain_name }}{{end}}
-  {{- end -}}
-  {{- end -}}
+{{range $index := until $master_replicas }}{{if ne $index 0}},{{end}}kudu-master-{{ $index }}.kudu-masters.$(NAMESPACE).svc.cluster.local{{end}}
 {{- end -}}
 
 {{/*
@@ -72,3 +68,21 @@ Ensures that the number of replicas running is never brought below the number ne
 {{- $master_replicas := 100 | div (100 | sub (2 | div ($master_replicas | add 100))) -}}
 {{- printf "%d" $master_replicas -}}
 {{- end -}}
+
+{{/*
+Generate a comma-separated list of Kudu Master data directories
+NOTE: the first directory is for WALs, so start the count at index 1.
+*/}}
+{{- define "kudu.gen_kudu_master_data_dirs" -}}
+{{- $num_dirs := .Values.storage.master.count | int -}}
+{{range $index := untilStep 1 $num_dirs 1 -}}{{if ne $index 1}},{{end}}/mnt/disk{{ $index }}{{end}}
+{{- end -}}
+
+{{/*
+Generate a comma-separated list of Kudu Tablet Server data directories
+NOTE: the first directory is for WALs, so start the count at index 1.
+*/}}
+{{- define "kudu.gen_kudu_tserver_data_dirs" -}}
+{{- $num_dirs := .Values.storage.tserver.count | int -}}
+{{range $index := untilStep 1 $num_dirs 1 -}}{{if ne $index 1}},{{end}}/mnt/disk{{ $index }}{{end}}
+{{- end -}}
diff --git a/kubernetes/helm/kudu/templates/kudu-service.yaml b/kubernetes/helm/kudu/templates/kudu-service.yaml
index a49ac60..aa61c0e 100644
--- a/kubernetes/helm/kudu/templates/kudu-service.yaml
+++ b/kubernetes/helm/kudu/templates/kudu-service.yaml
@@ -164,6 +164,14 @@ spec:
           valueFrom:
             fieldRef:
               fieldPath: metadata.namespace
+        - name: FS_WAL_DIR
+          value: /mnt/disk0
+        - name: FS_DATA_DIRS
+          {{ if eq .name "kudu-masters" }}
+          value: "{{ include "kudu.gen_kudu_master_data_dirs" $head | trim }}"
+          {{ else }}
+          value: "{{ include "kudu.gen_kudu_tserver_data_dirs" $head | trim }}"
+          {{ end }}
         - name: KUDU_MASTERS
           value: "{{ include "kudu.gen_kudu_masters" $head | trim }}"
         resources:
diff --git a/kubernetes/helm/kudu/values.yaml b/kubernetes/helm/kudu/values.yaml
index 0da7928..710e728 100644
--- a/kubernetes/helm/kudu/values.yaml
+++ b/kubernetes/helm/kudu/values.yaml
@@ -25,6 +25,12 @@ Image:
   tag: latest
   pullPolicy: IfNotPresent
 
+# NOTE: WALs and data directories will be placed in separate volumes if more
+# than one is available. If only one is available, they will be placed in the
+# same volume. Thus, users should avoid switching between a single-volume
+# deployment and multi-volume deployments, as that would move the data
+# directory location from /mnt/disk0 to /mnt/disk1,/mnt/disk2,..., and servers
+# would be unable to find their existing data directory in /mnt/disk0!
 storage:
   master:
     count: 3
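For readers unfamiliar with Sprig's `untilStep`, the data-directory helpers in the diff above can be mirrored in Python (an illustrative sketch, not part of the patch; `gen_data_dirs` is a hypothetical name). The indices run from 1 up to count-1, leaving /mnt/disk0 for WALs, and a comma precedes every entry except the first.

```python
# Illustrative Python mirror (not part of the patch) of the Sprig loop
#   {{range $index := untilStep 1 $num_dirs 1 -}}{{if ne $index 1}},{{end}}/mnt/disk{{ $index }}{{end}}
# used by kudu.gen_kudu_master_data_dirs and kudu.gen_kudu_tserver_data_dirs.

def gen_data_dirs(count: int) -> str:
    """Render the FS_DATA_DIRS value for `count` configured volumes."""
    parts = []
    for index in range(1, count):  # untilStep 1 count 1 -> 1, 2, ..., count-1
        if index != 1:             # comma before every entry except the first
            parts.append(",")
        parts.append(f"/mnt/disk{index}")
    return "".join(parts)

print(gen_data_dirs(5))        # /mnt/disk1,/mnt/disk2,/mnt/disk3,/mnt/disk4
print(repr(gen_data_dirs(1)))  # ''
```

Note the edge case: with count set to 1 the loop body never runs and the rendered list is empty, which matches the values.yaml note above that single-volume deployments place data alongside WALs in /mnt/disk0.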
