This is an automated email from the ASF dual-hosted git repository.
mmerli pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pulsar.git
The following commit(s) were added to refs/heads/master by this push:
new 2bc4812 [documentation][deploy] Improve helm deployment script to deploy Pulsar to minikube (#2363)
2bc4812 is described below
commit 2bc48129bfb4433182f1b05a9008faef76e387ac
Author: Sijie Guo <[email protected]>
AuthorDate: Thu Aug 16 00:25:49 2018 -0700
[documentation][deploy] Improve helm deployment script to deploy Pulsar to minikube (#2363)
* [documentation][deploy] Update deployment instructions for deploying to Minikube
* Enable functions workers
* [documentation][deploy] Improve helm deployment script to deploy Pulsar to minikube
### Changes
- update the helm scripts: bookie/autorecovery/broker pods should wait until metadata is initialized
- disable `autoRecovery` on bookies since we start `AutoRecovery` in separate pods
- enable function worker on brokers
- provide a values file for minikube
- update documentation for using helm chart to deploy a cluster to minikube
* move the service type definition to values file
---
deployment/kubernetes/helm/README.md | 54 +++++++++
.../pulsar/templates/autorecovery-deployment.yaml | 2 +-
.../pulsar/templates/bookkeeper-configmap.yaml | 2 +
.../pulsar/templates/bookkeeper-statefulset.yaml | 2 +-
.../helm/pulsar/templates/broker-configmap.yaml | 2 +
.../helm/pulsar/templates/broker-deployment.yaml | 3 +-
.../pulsar/templates/prometheus-deployment.yaml | 6 +
.../helm/pulsar/templates/proxy-deployment.yaml | 2 +-
.../helm/pulsar/templates/proxy-service.yaml | 2 +-
.../helm/pulsar/templates/zookeeper-metadata.yaml | 4 +
.../helm/pulsar/{values.yaml => values-mini.yaml} | 125 +++++++++------------
deployment/kubernetes/helm/pulsar/values.yaml | 9 +-
site2/docs/deploy-kubernetes.md | 35 +++++-
13 files changed, 166 insertions(+), 82 deletions(-)
diff --git a/deployment/kubernetes/helm/README.md b/deployment/kubernetes/helm/README.md
index 627b0fc..36f16bc 100644
--- a/deployment/kubernetes/helm/README.md
+++ b/deployment/kubernetes/helm/README.md
@@ -21,3 +21,57 @@
This directory contains the Helm Chart required
to do a complete Pulsar deployment on Kubernetes.
+
+## Install Helm
+
+Before you start, you need to install helm.
+Follow the [helm documentation](https://docs.helm.sh/using_helm/#installing-helm) to install it.
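+
+For example, on macOS with Homebrew (just one possible route; use whichever install method
+the helm documentation recommends for your platform):
+
+```shell
+# install the helm client and check that it runs
+brew install kubernetes-helm
+helm version --client
+```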
+
+## Deploy Pulsar
+
+### Minikube
+
+#### Install Minikube
+
+[Install and configure minikube](https://github.com/kubernetes/minikube#installation) with
+a [VM driver](https://github.com/kubernetes/minikube#requirements), e.g. `kvm2` on Linux
+or `hyperkit` or `VirtualBox` on macOS.
+
+#### Create a K8S cluster on Minikube
+
+```
+minikube start --memory=8192 --cpus=4 \
+ --kubernetes-version=v1.10.5
+```
+
+#### Set kubectl to use Minikube
+
+```
+kubectl config use-context minikube
+```
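+
+If `kubectl get nodes` lists the `minikube` node, the context switch worked. With Helm 2.x
+(the version assumed in these instructions) you also need to install its cluster-side
+component, Tiller, into this cluster before any chart can be installed:
+
+```
+kubectl get nodes
+
+# Helm 2.x only: installs Tiller into the cluster kubectl currently points at
+helm init
+```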
+
+After you have created a K8S cluster on Minikube, you can access its dashboard with the following command:
+
+```
+minikube dashboard
+```
+
+The command will automatically open the dashboard webpage in your browser.
+
+#### Install Pulsar Chart
+
+Assume you have already cloned the pulsar repo into a `PULSAR_HOME` directory.
+
+1. Go to the Pulsar helm chart directory:
+ ```shell
+ cd ${PULSAR_HOME}/deployment/kubernetes/helm
+ ```
+1. Install the helm chart:
+ ```shell
+ helm install --values pulsar/values-mini.yaml ./pulsar
+ ```
+
+Once the helm chart installation completes, you can access the cluster via:
+
+- Web service url: `http://$(minikube ip):30001/`
+- Pulsar service url: `pulsar://$(minikube ip):30002/`
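+
+To verify the deployment (optional; the `pulsar` namespace below is assumed from the chart
+defaults, adjust it if you changed `namespace` in your values file):
+
+```shell
+# all pods should eventually be Running or Completed
+kubectl get pods -n pulsar
+
+# the admin REST API should answer through the web service NodePort
+curl http://$(minikube ip):30001/admin/v2/clusters
+```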
diff --git a/deployment/kubernetes/helm/pulsar/templates/autorecovery-deployment.yaml b/deployment/kubernetes/helm/pulsar/templates/autorecovery-deployment.yaml
index fb98e1b..fe1dd08 100644
--- a/deployment/kubernetes/helm/pulsar/templates/autorecovery-deployment.yaml
+++ b/deployment/kubernetes/helm/pulsar/templates/autorecovery-deployment.yaml
@@ -83,7 +83,7 @@ spec:
command: ["sh", "-c"]
args:
- >-
- until nslookup {{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}-{{ add (.Values.zookeeper.replicaCount | int) -1 }}.{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}.{{ .Values.namespace }}; do
+ until bin/pulsar zookeeper-shell -server {{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }} ls /admin/clusters/{{ template "pulsar.fullname" . }}; do
sleep 3;
done;
containers:
diff --git a/deployment/kubernetes/helm/pulsar/templates/bookkeeper-configmap.yaml b/deployment/kubernetes/helm/pulsar/templates/bookkeeper-configmap.yaml
index 50ca87d..31c66df 100644
--- a/deployment/kubernetes/helm/pulsar/templates/bookkeeper-configmap.yaml
+++ b/deployment/kubernetes/helm/pulsar/templates/bookkeeper-configmap.yaml
@@ -33,4 +33,6 @@ data:
zkServers:
{{- $global := . }}
{{ range $i, $e := until (.Values.zookeeper.replicaCount | int) }}{{ if ne $i 0 }},{{ end }}{{ printf "%s-%s-%s-%d.%s-%s-%s" $global.Release.Name $global.Chart.Name $global.Values.zookeeper.component $i $global.Release.Name $global.Chart.Name $global.Values.zookeeper.component }}{{ end }}
+ # disable auto recovery on bookies since we will start AutoRecovery in separate pods
+ autoRecoveryDaemonEnabled: "false"
{{ toYaml .Values.bookkeeper.configData | indent 2 }}
diff --git a/deployment/kubernetes/helm/pulsar/templates/bookkeeper-statefulset.yaml b/deployment/kubernetes/helm/pulsar/templates/bookkeeper-statefulset.yaml
index a9c872a5..5d6387a 100644
--- a/deployment/kubernetes/helm/pulsar/templates/bookkeeper-statefulset.yaml
+++ b/deployment/kubernetes/helm/pulsar/templates/bookkeeper-statefulset.yaml
@@ -86,7 +86,7 @@ spec:
command: ["sh", "-c"]
args:
- >-
- until nslookup {{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}-{{ add (.Values.zookeeper.replicaCount | int) -1 }}.{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}.{{ .Values.namespace }}; do
+ until bin/pulsar zookeeper-shell -server {{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }} ls /admin/clusters/{{ template "pulsar.fullname" . }}; do
sleep 3;
done;
# This initContainer will make sure that the bookeeper
diff --git a/deployment/kubernetes/helm/pulsar/templates/broker-configmap.yaml b/deployment/kubernetes/helm/pulsar/templates/broker-configmap.yaml
index 4f7edb5..7d7df75 100644
--- a/deployment/kubernetes/helm/pulsar/templates/broker-configmap.yaml
+++ b/deployment/kubernetes/helm/pulsar/templates/broker-configmap.yaml
@@ -37,4 +37,6 @@ data:
{{- $global := . }}
{{ range $i, $e := until (.Values.zookeeper.replicaCount | int) }}{{ if ne $i 0 }},{{ end }}{{ printf "%s-%s-%s-%d.%s-%s-%s" $global.Release.Name $global.Chart.Name $global.Values.zookeeper.component $i $global.Release.Name $global.Chart.Name $global.Values.zookeeper.component }}{{ end }}
clusterName: {{ template "pulsar.fullname" . }}
+ functionsWorkerEnabled: "true"
+ PF_pulsarFunctionsCluster: {{ template "pulsar.fullname" . }}
{{ toYaml .Values.broker.configData | indent 2 }}
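With `functionsWorkerEnabled` set to `"true"` above, each broker also runs a functions worker. A hedged way to smoke-test it once the chart is installed, from inside a broker (or bastion) pod. The example jar path and class name are the ones shipped in the `pulsar-all` image and are assumptions rather than part of this patch:

```shell
# register a built-in example function and check that it shows up
bin/pulsar-admin functions create \
  --tenant public --namespace default --name exclamation \
  --jar examples/api-examples.jar \
  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
  --inputs persistent://public/default/exclamation-input \
  --output persistent://public/default/exclamation-output

bin/pulsar-admin functions list --tenant public --namespace default
```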
diff --git a/deployment/kubernetes/helm/pulsar/templates/broker-deployment.yaml b/deployment/kubernetes/helm/pulsar/templates/broker-deployment.yaml
index f9d8b7f..b4733df 100644
--- a/deployment/kubernetes/helm/pulsar/templates/broker-deployment.yaml
+++ b/deployment/kubernetes/helm/pulsar/templates/broker-deployment.yaml
@@ -82,7 +82,7 @@ spec:
command: ["sh", "-c"]
args:
- >-
- until nslookup {{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}-{{ add (.Values.zookeeper.replicaCount | int) -1 }}.{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}.{{ .Values.namespace }}; do
+ until bin/pulsar zookeeper-shell -server {{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }} ls /admin/clusters/{{ template "pulsar.fullname" . }}; do
sleep 3;
done;
containers:
@@ -98,6 +98,7 @@ spec:
- >
bin/apply-config-from-env.py conf/broker.conf &&
bin/apply-config-from-env.py conf/pulsar_env.sh &&
+ bin/gen-yml-from-env.py conf/functions_worker.yml &&
bin/pulsar broker
ports:
- name: http
diff --git a/deployment/kubernetes/helm/pulsar/templates/prometheus-deployment.yaml b/deployment/kubernetes/helm/pulsar/templates/prometheus-deployment.yaml
index 223fc6a..58a143d 100644
--- a/deployment/kubernetes/helm/pulsar/templates/prometheus-deployment.yaml
+++ b/deployment/kubernetes/helm/pulsar/templates/prometheus-deployment.yaml
@@ -76,7 +76,13 @@ spec:
- name: "{{ template "pulsar.fullname" . }}-{{
.Values.prometheus.component }}-config"
configMap:
name: "{{ template "pulsar.fullname" . }}-{{
.Values.prometheus.component }}"
+ {{- if not .Values.prometheus_persistence }}
+ - name: "{{ template "pulsar.fullname" . }}-{{
.Values.prometheus.component }}-{{ .Values.prometheus.volumes.data.name }}"
+ emptyDir: {}
+ {{- end }}
+ {{- if .Values.prometheus_persistence }}
- name: "{{ template "pulsar.fullname" . }}-{{
.Values.prometheus.component }}-{{ .Values.prometheus.volumes.data.name }}"
persistentVolumeClaim:
claimName: "{{ template "pulsar.fullname" . }}-{{
.Values.prometheus.component }}-{{ .Values.prometheus.volumes.data.name }}"
+ {{- end }}
{{- end }}
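The new `prometheus_persistence` flag above decides whether the Prometheus data volume is an `emptyDir` or a PersistentVolumeClaim. A sketch of flipping it at install time without editing the values file (the values file and chart path are the ones in this repo; `--set` works the same with `helm upgrade`):

```shell
# keep Prometheus data on a PersistentVolumeClaim instead of an emptyDir
helm install --values pulsar/values-mini.yaml --set prometheus_persistence=true ./pulsar
```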
diff --git a/deployment/kubernetes/helm/pulsar/templates/proxy-deployment.yaml b/deployment/kubernetes/helm/pulsar/templates/proxy-deployment.yaml
index 4567ed3..5180985 100644
--- a/deployment/kubernetes/helm/pulsar/templates/proxy-deployment.yaml
+++ b/deployment/kubernetes/helm/pulsar/templates/proxy-deployment.yaml
@@ -83,7 +83,7 @@ spec:
command: ["sh", "-c"]
args:
- >-
- until nslookup {{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}-{{ add (.Values.zookeeper.replicaCount | int) -1 }}.{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}.{{ .Values.namespace }}; do
+ until bin/pulsar zookeeper-shell -server {{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }} ls /admin/clusters/{{ template "pulsar.fullname" . }}; do
sleep 3;
done;
containers:
diff --git a/deployment/kubernetes/helm/pulsar/templates/proxy-service.yaml b/deployment/kubernetes/helm/pulsar/templates/proxy-service.yaml
index 9949371..522cfbf 100644
--- a/deployment/kubernetes/helm/pulsar/templates/proxy-service.yaml
+++ b/deployment/kubernetes/helm/pulsar/templates/proxy-service.yaml
@@ -33,7 +33,7 @@ metadata:
annotations:
{{ toYaml .Values.proxy.service.annotations | indent 4 }}
spec:
- type: NodePort
+ type: {{ .Values.proxy.service.type }}
ports:
{{ toYaml .Values.proxy.service.ports | indent 2 }}
selector:
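Since the proxy service type is now read from values instead of being hard-coded to `NodePort`, it can be overridden per environment. For example (a sketch; `LoadBalancer` only makes sense on providers that can provision one, and this assumes `proxy.service.type` is defined in the values file you pass):

```shell
helm install --values pulsar/values.yaml --set proxy.service.type=LoadBalancer ./pulsar
```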
diff --git a/deployment/kubernetes/helm/pulsar/templates/zookeeper-metadata.yaml b/deployment/kubernetes/helm/pulsar/templates/zookeeper-metadata.yaml
index 4a62710..bb25f46 100644
--- a/deployment/kubernetes/helm/pulsar/templates/zookeeper-metadata.yaml
+++ b/deployment/kubernetes/helm/pulsar/templates/zookeeper-metadata.yaml
@@ -46,6 +46,10 @@ spec:
- name: "{{ template "pulsar.fullname" . }}-{{
.Values.zookeeperMetadata.component }}"
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
+ {{- if .Values.zookeeper_metadata.resources }}
+ resources:
+{{ toYaml .Values.zookeeper_metadata.resources | indent 10 }}
+ {{- end }}
command: ["sh", "-c"]
args:
- >
diff --git a/deployment/kubernetes/helm/pulsar/values.yaml b/deployment/kubernetes/helm/pulsar/values-mini.yaml
similarity index 71%
copy from deployment/kubernetes/helm/pulsar/values.yaml
copy to deployment/kubernetes/helm/pulsar/values-mini.yaml
index 2a274b5..94a5bc3 100644
--- a/deployment/kubernetes/helm/pulsar/values.yaml
+++ b/deployment/kubernetes/helm/pulsar/values-mini.yaml
@@ -26,6 +26,11 @@ namespaceCreate: yes
## purposes, they will be deployed with emptDir
persistence: no
+## If prometheus_persistence is enabled, prometheus will be deployed
+## with PersistentVolumeClaims, otherwise, for test purposes, they
+## will be deployed with emptyDir
+prometheus_persistence: no
+
## which extra components to deploy
extra:
# Pulsar proxy
@@ -41,7 +46,7 @@ extra:
## Which pulsar image to use
image:
- repository: apachepulsar/pulsar
+ repository: apachepulsar/pulsar-all
tag: latest
pullPolicy: IfNotPresent
@@ -63,24 +68,17 @@ zookeeper:
gracePeriod: 0
resources:
requests:
- memory: 15Gi
- cpu: 4
+ memory: 64Mi
+ cpu: 0.1
volumes:
data:
name: data
- size: 20Gi
- ## If the storage class is left undefined when using persistence
- ## the default storage class for the cluster will be used.
- ##
- # storageClass:
- # type: pd-ssd
- # fsType: xfs
- # provisioner: kubernetes.io/gce-pd
+ size: 2Gi
## Zookeeper configmap
## templates/zookeeper-configmap.yaml
##
configData:
- PULSAR_MEM: "\"-Xms15g -Xmx15g -Dcom.sun.management.jmxremote
-Djute.maxbuffer=10485760 -XX:+ParallelRefProcEnabled
-XX:+UnlockExperimentalVMOptions -XX:+AggressiveOpts -XX:+DoEscapeAnalysis
-XX:+DisableExplicitGC -XX:+PerfDisableSharedMem -Dzookeeper.forceSync=no\""
+ PULSAR_MEM: "\"-Xms64m -Xmx128m -Dcom.sun.management.jmxremote
-Djute.maxbuffer=10485760 -XX:+ParallelRefProcEnabled
-XX:+UnlockExperimentalVMOptions -XX:+AggressiveOpts -XX:+DoEscapeAnalysis
-XX:+DisableExplicitGC -XX:+PerfDisableSharedMem -Dzookeeper.forceSync=no\""
PULSAR_GC: "\"-XX:+UseG1GC -XX:MaxGCPauseMillis=10\""
## Zookeeper service
## templates/zookeeper-service.yaml
@@ -116,7 +114,7 @@ zookeeperMetadata:
##
bookkeeper:
component: bookkeeper
- replicaCount: 4
+ replicaCount: 3
updateStrategy:
type: OnDelete
podManagementPolicy: OrderedReady
@@ -129,37 +127,22 @@ bookkeeper:
gracePeriod: 0
resources:
requests:
- memory: 15Gi
- cpu: 4
+ memory: 128Mi
+ cpu: 0.2
volumes:
journal:
name: journal
- size: 50Gi
- ## If the storage class is left undefined when using persistence
- ## the default storage class for the cluster will be used.
- ##
- # storageClass:
- # type: pd-ssd
- # fsType: xfs
- # provisioner: kubernetes.io/gce-pd
+ size: 5Gi
ledgers:
name: ledgers
- size: 50Gi
- ## If the storage class is left undefined when using persistence
- ## the default storage class for the cluster will be used.
- ##
- # storageClass:
- # type: pd-ssd
- # fsType: xfs
- # provisioner: kubernetes.io/gce-pd
+ size: 5Gi
## Bookkeeper configmap
## templates/bookkeeper-configmap.yaml
##
configData:
- PULSAR_MEM: "\"-Xms15g -Xmx15g -XX:MaxDirectMemorySize=15g -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.linkCapacity=1024 -XX:+UseG1GC -XX:MaxGCPauseMillis=10 -XX:+ParallelRefProcEnabled -XX:+UnlockExperimentalVMOptions -XX:+AggressiveOpts -XX:+DoEscapeAnalysis -XX:ParallelGCThreads=32 -XX:ConcGCThreads=32 -XX:G1NewSizePercent=50 -XX:+DisableExplicitGC -XX:-ResizePLAB -XX:+ExitOnOutOfMemoryError -XX:+PerfDisableSharedMem -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+P [...]
- dbStorage_writeCacheMaxSizeMb: "2048"
- dbStorage_readAheadCacheMaxSizeMb: "2048"
- dbStorage_rocksDB_blockCacheSize: "268435456"
+ PULSAR_MEM: "\"-Xms128m -Xmx256m -XX:MaxDirectMemorySize=128m -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.linkCapacity=1024 -XX:+UseG1GC -XX:MaxGCPauseMillis=10 -XX:+ParallelRefProcEnabled -XX:+UnlockExperimentalVMOptions -XX:+AggressiveOpts -XX:+DoEscapeAnalysis -XX:ParallelGCThreads=32 -XX:ConcGCThreads=32 -XX:G1NewSizePercent=50 -XX:+DisableExplicitGC -XX:-ResizePLAB -XX:+ExitOnOutOfMemoryError -XX:+PerfDisableSharedMem -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX [...]
+ dbStorage_writeCacheMaxSizeMb: "32"
+ dbStorage_readAheadCacheMaxSizeMb: "32"
journalMaxSizeMB: "2048"
statsProviderClass: org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider
useHostNameAsBookieID: "true"
@@ -184,7 +167,7 @@ bookkeeper:
##
broker:
component: broker
- replicaCount: 3
+ replicaCount: 2
# nodeSelector:
# cloud.google.com/gke-nodepool: default-pool
annotations:
@@ -194,16 +177,16 @@ broker:
gracePeriod: 0
resources:
requests:
- memory: 15Gi
- cpu: 4
+ memory: 128Mi
+ cpu: 0.2
## Broker configmap
## templates/broker-configmap.yaml
##
configData:
- PULSAR_MEM: "\"-Xms15g -Xmx15g -XX:MaxDirectMemorySize=15g -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.linkCapacity=1024 -XX:+ParallelRefProcEnabled -XX:+UnlockExperimentalVMOptions -XX:+AggressiveOpts -XX:+DoEscapeAnalysis -XX:ParallelGCThreads=32 -XX:ConcGCThreads=32 -XX:G1NewSizePercent=50 -XX:+DisableExplicitGC -XX:-ResizePLAB -XX:+ExitOnOutOfMemoryError -XX:+PerfDisableSharedMem\""
+ PULSAR_MEM: "\"-Xms128m -Xmx256m -XX:MaxDirectMemorySize=128m -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.linkCapacity=1024 -XX:+ParallelRefProcEnabled -XX:+UnlockExperimentalVMOptions -XX:+AggressiveOpts -XX:+DoEscapeAnalysis -XX:ParallelGCThreads=32 -XX:ConcGCThreads=32 -XX:G1NewSizePercent=50 -XX:+DisableExplicitGC -XX:-ResizePLAB -XX:+ExitOnOutOfMemoryError -XX:+PerfDisableSharedMem\""
PULSAR_GC: "\"-XX:+UseG1GC -XX:MaxGCPauseMillis=10\""
- managedLedgerDefaultEnsembleSize: "3"
- managedLedgerDefaultWriteQuorum: "3"
+ managedLedgerDefaultEnsembleSize: "2"
+ managedLedgerDefaultWriteQuorum: "2"
managedLedgerDefaultAckQuorum: "2"
deduplicationEnabled: "false"
exposeTopicLevelMetricsInPrometheus: "true"
@@ -229,7 +212,7 @@ broker:
##
proxy:
component: proxy
- replicaCount: 3
+ replicaCount: 1
# nodeSelector:
# cloud.google.com/gke-nodepool: default-pool
annotations:
@@ -239,18 +222,19 @@ proxy:
gracePeriod: 0
resources:
requests:
- memory: 4Gi
- cpu: 1
+ memory: 64Mi
+ cpu: 0.1
## Proxy configmap
## templates/proxy-configmap.yaml
##
configData:
- PULSAR_MEM: "\"-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g\""
+ PULSAR_MEM: "\"-Xms64m -Xmx128m -XX:MaxDirectMemorySize=64m\""
## Proxy service
## templates/proxy-service.yaml
##
service:
annotations: {}
+ type: NodePort
ports:
- name: http
port: 8080
@@ -280,13 +264,13 @@ autoRecovery:
gracePeriod: 0
resources:
requests:
- memory: 1Gi
- cpu: 250m
+ memory: 64Mi
+ cpu: 0.05
## Bookkeeper auto-recovery configmap
## templates/autorecovery-configmap.yaml
##
configData:
- PULSAR_MEM: "\" -Xms1g -Xmx1g \""
+ PULSAR_MEM: "\" -Xms64m -Xmx128m \""
## Pulsar Extra: Dashboard
## templates/dashboard-deployment.yaml
@@ -299,14 +283,14 @@ dashboard:
annotations: {}
tolarations: []
gracePeriod: 0
+ resources:
+ requests:
+ memory: 64Mi
+ cpu: 0.1
image:
repository: apachepulsar/pulsar-dashboard
tag: latest
pullPolicy: IfNotPresent
- resources:
- requests:
- memory: 1Gi
- cpu: 250m
## Dashboard service
## templates/dashboard-service.yaml
##
@@ -329,13 +313,13 @@ bastion:
gracePeriod: 0
resources:
requests:
- memory: 1Gi
- cpu: 250m
+ memory: 128Mi
+ cpu: 0.1
## Bastion configmap
## templates/bastion-configmap.yaml
##
configData:
- PULSAR_MEM: "\"-Xms1g -Xmx1g -XX:MaxDirectMemorySize=1g\""
+ PULSAR_MEM: "\"-Xms128m -Xmx256m -XX:MaxDirectMemorySize=128m\""
## Monitoring Stack: Prometheus
## templates/prometheus-deployment.yaml
@@ -348,25 +332,18 @@ prometheus:
annotations: {}
tolarations: []
gracePeriod: 0
+ resources:
+ requests:
+ memory: 64Mi
+ cpu: 0.1
image:
repository: prom/prometheus
tag: v1.6.3
pullPolicy: IfNotPresent
- resources:
- requests:
- memory: 4Gi
- cpu: 1
volumes:
data:
name: data
- size: 50Gi
- ## If the storage class is left undefined when using persistence
- ## the default storage class for the cluster will be used.
- ##
- # storageClass:
- # type: pd-standard
- # fsType: xfs
- # provisioner: kubernetes.io/gce-pd
+ size: 2Gi
## Prometheus service
## templates/prometheus-service.yaml
##
@@ -387,14 +364,14 @@ grafana:
annotations: {}
tolarations: []
gracePeriod: 0
+ resources:
+ requests:
+ memory: 64Mi
+ cpu: 0.1
image:
repository: apachepulsar/pulsar-grafana
tag: latest
pullPolicy: IfNotPresent
- resources:
- requests:
- memory: 4Gi
- cpu: 1
## Grafana service
## templates/grafana-service.yaml
##
@@ -403,3 +380,9 @@ grafana:
ports:
- name: server
port: 3000
+
+zookeeper_metadata:
+ resources:
+ requests:
+ memory: 128Mi
+ cpu: 0.1
diff --git a/deployment/kubernetes/helm/pulsar/values.yaml b/deployment/kubernetes/helm/pulsar/values.yaml
index 2a274b5..edbb973 100644
--- a/deployment/kubernetes/helm/pulsar/values.yaml
+++ b/deployment/kubernetes/helm/pulsar/values.yaml
@@ -23,9 +23,14 @@ namespaceCreate: yes
## If persistence is enabled, components that has state will
## be deployed with PersistentVolumeClaims, otherwise, for test
-## purposes, they will be deployed with emptDir
+## purposes, they will be deployed with emptyDir
persistence: no
+## If prometheus_persistence is enabled, prometheus will be deployed
+## with PersistentVolumeClaims, otherwise, for test purposes, they
+## will be deployed with emptyDir
+prometheus_persistence: yes
+
## which extra components to deploy
extra:
# Pulsar proxy
@@ -41,7 +46,7 @@ extra:
## Which pulsar image to use
image:
- repository: apachepulsar/pulsar
+ repository: apachepulsar/pulsar-all
tag: latest
pullPolicy: IfNotPresent
diff --git a/site2/docs/deploy-kubernetes.md b/site2/docs/deploy-kubernetes.md
index 6f04164..3d0dd91 100644
--- a/site2/docs/deploy-kubernetes.md
+++ b/site2/docs/deploy-kubernetes.md
@@ -67,7 +67,8 @@ $ gcloud container clusters get-credentials pulsar-gke-cluster \
$ kubectl proxy
```
-By default, the proxy will be opened on port 8001. Now you can navigate to [localhost:8001/ui](http://localhost:8001/ui) in your browser to access the dashboard. At first your GKE cluster will be empty, but that will change as you begin deploying Pulsar [components](#deploying-pulsar-components).
+By default, the proxy will be opened on port 8001. Now you can navigate to [localhost:8001/ui](http://localhost:8001/ui) in your browser to access the dashboard. At first your GKE cluster will be empty, but that will change as you begin deploying Pulsar components using `kubectl` [component by component](#deploying-pulsar-components),
+or using [`helm`](#deploying-pulsar-components-helm).
## Pulsar on Amazon Web Services
@@ -81,7 +82,8 @@ When you create a cluster using those instructions, your `kubectl` config in `~/
$ kubectl get nodes
```
-If `kubectl` is working with your cluster, you can proceed to [deploy Pulsar components](#deploying-pulsar-components).
+If `kubectl` is working with your cluster, you can proceed to deploy Pulsar components using `kubectl` [component by component](#deploying-pulsar-components),
+or using [`helm`](#deploying-pulsar-components-helm).
## Pulsar on a custom Kubernetes cluster
@@ -114,7 +116,8 @@ $ minikube dashboard
```
The command will automatically trigger open a webpage in your browser. At first your local cluster will be empty,
-but that will change as you begin deploying Pulsar [components](#deploying-pulsar-components).
+but that will change as you begin deploying Pulsar components using `kubectl` [component by component](#deploying-pulsar-components),
+or using [`helm`](#deploying-pulsar-components-helm).
### Multiple VMs
@@ -159,7 +162,9 @@ In order to use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/acce
$ kubectl proxy
```
-Now you can access the web interface at [localhost:8001/ui](http://localhost:8001/ui). At first your local cluster will be empty, but that will change as you begin deploying Pulsar [components](#deploying-pulsar-components).
+Now you can access the web interface at [localhost:8001/ui](http://localhost:8001/ui). At first your local cluster will be empty,
+but that will change as you begin deploying Pulsar components using `kubectl` [component by component](#deploying-pulsar-components),
+or using [`helm`](#deploying-pulsar-components-helm).
## Deploying Pulsar components
@@ -368,3 +373,25 @@ You can find client documentation for:
* [C++](client-libraries-cpp.md)
+## Deploying Pulsar components (helm)
+
+Pulsar also provides a [Helm](https://docs.helm.sh/) chart for deploying a Pulsar cluster to Kubernetes. Before you start,
+make sure you follow the [Helm documentation](https://docs.helm.sh/using_helm) to install helm.
+
+> Assume you have cloned the pulsar repo under a `PULSAR_HOME` directory.
+
+### Minikube
+
+1. Go to the Pulsar helm chart directory:
+ ```shell
+ cd ${PULSAR_HOME}/deployment/kubernetes/helm
+ ```
+1. Install the helm chart to the K8S cluster on Minikube:
+ ```shell
+ helm install --values pulsar/values-mini.yaml ./pulsar
+ ```
+
+Once the helm chart installation completes, you can access the cluster via:
+
+- Web service url: `http://$(minikube ip):30001/`
+- Pulsar service url: `pulsar://$(minikube ip):30002/`
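+
+As a final smoke test (optional; this assumes a local Pulsar binary distribution and uses an
+example topic and subscription name), you can consume and produce a message through the
+proxy NodePort:
+
+```shell
+# in one terminal: subscribe and wait for one message
+bin/pulsar-client --url pulsar://$(minikube ip):30002 consume test-topic -s test-sub -n 1
+
+# in another terminal: publish one message
+bin/pulsar-client --url pulsar://$(minikube ip):30002 produce test-topic --messages "hello"
+```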