This is an automated email from the ASF dual-hosted git repository.

mmerli pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pulsar.git


The following commit(s) were added to refs/heads/master by this push:
     new c6c7def   [documentation][deploy] Update deployment instructions for deploying pulsar to minikube (#2362)
c6c7def is described below

commit c6c7def97b09d9c070d558b75e43bcea5040aa7c
Author: Sijie Guo <[email protected]>
AuthorDate: Wed Aug 15 08:56:50 2018 -0700

     [documentation][deploy] Update deployment instructions for deploying pulsar to minikube (#2362)
    
    * [documentation][deploy] Update deployment instructions for deploying to Minikube
    
    * Enable functions workers
---
 deployment/kubernetes/generic/admin.yaml           | 44 +++++++++++++
 deployment/kubernetes/generic/bookie.yaml          | 50 ++-------------
 deployment/kubernetes/generic/broker.yaml          | 42 ++++--------
 .../kubernetes/generic/cluster-metadata.yaml       | 42 ++++++++++++
 deployment/kubernetes/generic/monitoring.yaml      | 26 ++++----
 deployment/kubernetes/generic/proxy.yaml           | 12 ++--
 deployment/kubernetes/generic/zookeeper.yaml       |  2 +-
 site2/docs/deploy-kubernetes.md                    | 75 ++++++++++++++++++----
 8 files changed, 189 insertions(+), 104 deletions(-)

diff --git a/deployment/kubernetes/generic/admin.yaml b/deployment/kubernetes/generic/admin.yaml
new file mode 100644
index 0000000..8c84b28
--- /dev/null
+++ b/deployment/kubernetes/generic/admin.yaml
@@ -0,0 +1,44 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+apiVersion: v1
+kind: Pod
+metadata:
+    name: pulsar-admin
+spec:
+    containers:
+      - name: pulsar-admin
+        image: apachepulsar/pulsar:latest
+        command: ["sh", "-c"]
+        args:
+          - >
+            bin/apply-config-from-env.py conf/client.conf &&
+            bin/apply-config-from-env.py conf/pulsar_env.sh &&
+            bin/apply-config-from-env.py conf/pulsar_tools_env.sh &&
+            sleep 10000000000
+        envFrom:
+          - configMapRef:
+                name: broker-config
+        env:
+          - name: webServiceUrl
+            value: "http://proxy:8080/"
+          - name: brokerServiceUrl
+            value: "pulsar://proxy:6650/"
+          - name: PULSAR_MEM
+            value: "\"-Xms64m -Xmx128m\""
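
The standalone admin pod added above can be exercised once applied; a minimal usage sketch (assuming the manifests are applied from `deployment/kubernetes/generic` and the proxy service is up):

```bash
# Start the pulsar-admin pod defined in admin.yaml
$ kubectl apply -f admin.yaml

# Once the pod reports Running, run admin commands inside it (hypothetical session)
$ kubectl exec -it pulsar-admin -- bin/pulsar-admin clusters list
```
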
diff --git a/deployment/kubernetes/generic/bookie.yaml b/deployment/kubernetes/generic/bookie.yaml
index 7955097..fd1d044 100644
--- a/deployment/kubernetes/generic/bookie.yaml
+++ b/deployment/kubernetes/generic/bookie.yaml
@@ -23,10 +23,10 @@ kind: ConfigMap
 metadata:
     name: bookie-config
 data:
-    PULSAR_MEM: "\" -Xms512m -Xmx512m -XX:MaxDirectMemorySize=1g\""
-    dbStorage_writeCacheMaxSizeMb: "256" # Write cache size (direct memory)
-    dbStorage_readAheadCacheMaxSizeMb: "256" # Read cache size (direct memory)
-    zkServers: zk-0.zookeeper,zk-1.zookeeper,zk-2.zookeeper
+    PULSAR_MEM: "\" -Xms64m -Xmx256m -XX:MaxDirectMemorySize=256m\""
+    dbStorage_writeCacheMaxSizeMb: "32" # Write cache size (direct memory)
+    dbStorage_readAheadCacheMaxSizeMb: "32" # Read cache size (direct memory)
+    zkServers: zookeeper
     statsProviderClass: org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider
 ---
 
@@ -49,7 +49,7 @@ spec:
                 component: bookkeeper
                 # Specify cluster to allow aggregation by cluster in
                 # the metrics
-                cluster: us-central
+                cluster: local
             annotations:
                 prometheus.io/scrape: "true"
                 prometheus.io/port: "8000"
@@ -57,7 +57,7 @@ spec:
         spec:
             containers:
               - name: bookie
-                image: apachepulsar/pulsar:latest
+                image: apachepulsar/pulsar-all:latest
                 command: ["sh", "-c"]
                 args:
                   - >
@@ -89,7 +89,7 @@ spec:
                 # The first time, initialize BK metadata in zookeeper
                  # Otherwise ignore error if it's already there
               - name: bookie-metaformat
-                image: apachepulsar/pulsar:latest
+                image: apachepulsar/pulsar-all:latest
                 command: ["sh", "-c"]
                 args:
                   - >
@@ -130,39 +130,3 @@ spec:
     selector:
         app: pulsar
         component: bookkeeper
-
----
-##
-## Run BookKeeper auto-recovery from a different set of containers
-## Auto-Recovery makes sure to restore the replication factor when any bookie
-## crashes and it's not recovering on its own.
-##
-apiVersion: apps/v1beta1
-kind: Deployment
-metadata:
-    name: bookie-autorecovery
-spec:
-    replicas: 2
-    template:
-        metadata:
-            labels:
-                app: pulsar
-                component: bookkeeper-replication
-        spec:
-            containers:
-              - name: replication-worker
-                image: apachepulsar/pulsar:latest
-                command: ["sh", "-c"]
-                args:
-                  - >
-                    bin/apply-config-from-env.py conf/bookkeeper.conf &&
-                    bin/bookkeeper autorecovery
-                envFrom:
-                  - configMapRef:
-                        name: bookie-config
-                env:
-                    ## Configure for lower mem usage
-                  - name: PULSAR_MEM
-                    value: "\" -Xmx256m \""
-                  - name: PULSAR_GC
-                    value: "\"  \""
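
A quick way to verify the downsized bookies after applying this file; a sketch that relies on the `app: pulsar` / `component: bookkeeper` labels shown above:

```bash
# Apply the bookie definitions and watch the pods come up
$ kubectl apply -f bookie.yaml
$ kubectl get pods -l app=pulsar,component=bookkeeper -w
```
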
diff --git a/deployment/kubernetes/generic/broker.yaml b/deployment/kubernetes/generic/broker.yaml
index cf760c1..031d992 100644
--- a/deployment/kubernetes/generic/broker.yaml
+++ b/deployment/kubernetes/generic/broker.yaml
@@ -25,10 +25,17 @@ metadata:
 data:
     # Tune for available memory. Increase the heap up to 24G to have
     # better GC behavior at high throughput
-    PULSAR_MEM: "\" -Xms1g -Xmx1g -XX:MaxDirectMemorySize=1g\""
-    zookeeperServers: zk-0.zookeeper,zk-1.zookeeper,zk-2.zookeeper
-    configurationStoreServers: zk-0.zookeeper,zk-1.zookeeper,zk-2.zookeeper
-    clusterName: us-central
+    PULSAR_MEM: "\" -Xms64m -Xmx128m -XX:MaxDirectMemorySize=128m\""
+    zookeeperServers: zookeeper
+    configurationStoreServers: zookeeper
+    clusterName: local
+    # change the managed ledger settings if you have more bookies
+    managedLedgerDefaultEnsembleSize: "1"
+    managedLedgerDefaultWriteQuorum: "1"
+    managedLedgerDefaultAckQuorum: "1"
+    # enable pulsar functions
+    functionsWorkerEnabled: "true"
+    PF_pulsarFunctionsCluster: local
 ---
 ##
 ## Broker deployment definition
@@ -50,12 +57,13 @@ spec:
         spec:
             containers:
               - name: broker
-                image: apachepulsar/pulsar:latest
+                image: apachepulsar/pulsar-all:latest
                 command: ["sh", "-c"]
                 args:
                   - >
                     bin/apply-config-from-env.py conf/broker.conf &&
                     bin/apply-config-from-env.py conf/pulsar_env.sh &&
+                    bin/gen-yml-from-env.py conf/functions_worker.yml &&
                     bin/pulsar broker
                 ports:
                   - containerPort: 8080
@@ -96,27 +104,3 @@ spec:
         component: broker
 
 ---
-
-###
-
-apiVersion: v1
-kind: Pod
-metadata:
-    name: pulsar-admin
-spec:
-    containers:
-      - name: pulsar-admin
-        image: apachepulsar/pulsar:latest
-        command: ["sh", "-c"]
-        args:
-          - >
-            bin/apply-config-from-env.py conf/client.conf &&
-            sleep 10000000000
-        envFrom:
-          - configMapRef:
-                name: broker-config
-        env:
-          - name: webServiceUrl
-            value: http://broker:8080/
-          - name: brokerServiceUrl
-            value: pulsar://broker:6650/
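
Since `functionsWorkerEnabled` is now turned on in the broker ConfigMap, a hedged sketch for checking that the embedded functions worker answers (assuming the `pulsar-admin` pod from `admin.yaml` is running):

```bash
# Apply the broker definitions
$ kubectl apply -f broker.yaml

# From the admin pod, list functions in a namespace to confirm the worker responds
$ kubectl exec -it pulsar-admin -- bin/pulsar-admin functions list \
    --tenant public --namespace default
```
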
diff --git a/deployment/kubernetes/generic/cluster-metadata.yaml b/deployment/kubernetes/generic/cluster-metadata.yaml
new file mode 100644
index 0000000..d502a0e
--- /dev/null
+++ b/deployment/kubernetes/generic/cluster-metadata.yaml
@@ -0,0 +1,42 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: pulsar-cluster-metadata-init
+  labels:
+    app: pulsar
+    component: broker
+spec:
+  template:
+    spec:
+      containers:
+        - name: pulsar-cluster-metadata-init-container
+          image: apachepulsar/pulsar:latest
+          command: ["sh", "-c"]
+          args:
+            - >
+              bin/pulsar initialize-cluster-metadata \
+                --cluster local \
+                --zookeeper zookeeper \
+                --configuration-store zookeeper \
+                --web-service-url http://broker.default.svc.cluster.local:8080/ \
+                --broker-service-url pulsar://broker.default.svc.cluster.local:6650/ || true;
+      restartPolicy: Never
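
As a usage sketch, the one-shot metadata job above can be run and checked like this (assuming the default namespace):

```bash
# Run the metadata-initialization job once ZooKeeper is up
$ kubectl apply -f cluster-metadata.yaml

# Confirm the job completed and inspect its output
$ kubectl get job pulsar-cluster-metadata-init
$ kubectl logs job/pulsar-cluster-metadata-init
```
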
diff --git a/deployment/kubernetes/generic/monitoring.yaml b/deployment/kubernetes/generic/monitoring.yaml
index 2873203..089690f 100644
--- a/deployment/kubernetes/generic/monitoring.yaml
+++ b/deployment/kubernetes/generic/monitoring.yaml
@@ -101,10 +101,12 @@ metadata:
         app: pulsar
         component: prometheus
 spec:
+    type: NodePort
     ports:
-      - port: 9090
-        name: server
-    clusterIP: None
+      - name: prometheus
+        nodePort: 30003
+        port: 9090
+        protocol: TCP
     selector:
         app: pulsar
         component: prometheus
@@ -144,16 +146,16 @@ metadata:
         app: pulsar
         component: grafana
 spec:
+    type: NodePort
     ports:
-      - port: 3000
-        name: server
-    clusterIP: None
+      - name: grafana
+        nodePort: 30004
+        port: 3000
+        protocol: TCP 
     selector:
         app: pulsar
         component: grafana
 
-
-
 ---
 ## Include detailed Pulsar dashboard
 
@@ -188,10 +190,12 @@ metadata:
         app: pulsar
         component: dashboard
 spec:
+    type: NodePort
     ports:
-      - port: 80
-        name: server
-    clusterIP: None
+      - name: dashboard
+        nodePort: 30005
+        port: 80
+        protocol: TCP
     selector:
         app: pulsar
         component: dashboard
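
With the monitoring services switched to `NodePort`, their endpoints become reachable from outside the cluster; a minimal sketch for Minikube, using the node ports defined above:

```bash
# Resolve the Minikube VM address and build the service URLs
$ MINIKUBE_IP=$(minikube ip)
$ echo "Prometheus: http://$MINIKUBE_IP:30003"
$ echo "Grafana:    http://$MINIKUBE_IP:30004"
$ echo "Dashboard:  http://$MINIKUBE_IP:30005"
```
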
diff --git a/deployment/kubernetes/generic/proxy.yaml b/deployment/kubernetes/generic/proxy.yaml
index 6715cf1..8268835 100644
--- a/deployment/kubernetes/generic/proxy.yaml
+++ b/deployment/kubernetes/generic/proxy.yaml
@@ -23,10 +23,10 @@ kind: ConfigMap
 metadata:
     name: proxy-config
 data:
-    PULSAR_MEM: "\" -Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g\""
-    zookeeperServers: zk-0.zookeeper,zk-1.zookeeper,zk-2.zookeeper
-    configurationStoreServers: zk-0.zookeeper,zk-1.zookeeper,zk-2.zookeeper
-    clusterName: us-central
+    PULSAR_MEM: "\" -Xms64m -Xmx128m -XX:MaxDirectMemorySize=128m\""
+    zookeeperServers: zookeeper
+    configurationStoreServers: zookeeper
+    clusterName: local
 ---
 ##
 ## Proxy deployment definition
@@ -36,7 +36,7 @@ kind: Deployment
 metadata:
     name: proxy
 spec:
-    replicas: 5
+    replicas: 2
     template:
         metadata:
             labels:
@@ -48,7 +48,7 @@ spec:
         spec:
             containers:
               - name: proxy
-                image: apachepulsar/pulsar:latest
+                image: apachepulsar/pulsar-all:latest
                 command: ["sh", "-c"]
                 args:
                   - >
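
After applying the proxy definitions, a hedged check that both replicas are serving (the `component: proxy` label is an assumption, matching the pattern of the other manifests):

```bash
# Apply the proxy definitions and confirm the two replicas are Running
$ kubectl apply -f proxy.yaml
$ kubectl get pods -l app=pulsar,component=proxy
```
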
diff --git a/deployment/kubernetes/generic/zookeeper.yaml b/deployment/kubernetes/generic/zookeeper.yaml
index 9cad9cb..e0be77d 100644
--- a/deployment/kubernetes/generic/zookeeper.yaml
+++ b/deployment/kubernetes/generic/zookeeper.yaml
@@ -79,7 +79,7 @@ spec:
                             topologyKey: "kubernetes.io/hostname"
             containers:
               - name: zookeeper
-                image: apachepulsar/pulsar:latest
+                image: apachepulsar/pulsar-all:latest
                 command: ["sh", "-c"]
                 args:
                   - >
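
Since ZooKeeper must come up before everything else, a short sketch of the first deployment step (the label selector is an assumption, following the convention of the other manifests):

```bash
# Deploy ZooKeeper first and wait until all zk pods report Running
$ kubectl apply -f zookeeper.yaml
$ kubectl get pods -l app=pulsar,component=zookeeper -w
```
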
diff --git a/site2/docs/deploy-kubernetes.md b/site2/docs/deploy-kubernetes.md
index c39b779..6f04164 100644
--- a/site2/docs/deploy-kubernetes.md
+++ b/site2/docs/deploy-kubernetes.md
@@ -87,13 +87,37 @@ If `kubectl` is working with your cluster, you can proceed to [deploy Pulsar com
 
 Pulsar can be deployed on a custom, non-GKE Kubernetes cluster as well. You can find detailed documentation on how to choose a Kubernetes installation method that suits your needs in the [Picking the Right Solution](https://kubernetes.io/docs/setup/pick-right-solution) guide in the Kubernetes docs.
 
-### Local cluster
-
 The easiest way to run a Kubernetes cluster is to do so locally. To install a mini local cluster for testing purposes, running in local VMs, you can either:
 
 1. Use [minikube](https://kubernetes.io/docs/getting-started-guides/minikube/) to run a single-node Kubernetes cluster
 1. Create a local cluster running on multiple VMs on the same machine
 
+### Minikube
+
+1. [Install and configure minikube](https://github.com/kubernetes/minikube#installation) with
+   a [VM driver](https://github.com/kubernetes/minikube#requirements), e.g. `kvm2` on Linux or `hyperkit` or `VirtualBox` on macOS.
+1. Create a Kubernetes cluster on Minikube.
+    ```shell
+    minikube start --memory=8192 --cpus=4 \
+        --kubernetes-version=v1.10.5
+    ```
+1. Set `kubectl` to use Minikube.
+    ```shell
+    kubectl config use-context minikube
+    ```
+
+In order to use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/)
+with your local Kubernetes cluster on Minikube, run:
+
+```bash
+$ minikube dashboard
+```
+
+The command automatically opens a web page in your browser. At first your local cluster will be empty,
+but that will change as you begin deploying Pulsar [components](#deploying-pulsar-components).
+
+### Multiple VMs
+
 For the second option, follow the [instructions](https://github.com/pires/kubernetes-vagrant-coreos-cluster) for running Kubernetes using [CoreOS](https://coreos.com/) on [Vagrant](https://www.vagrantup.com/). We'll provide an abridged version of those instructions here.
 
 
@@ -129,8 +153,6 @@ NAME           STATUS                     AGE       VERSION
 172.17.8.104   Ready                      4m        v1.6.4
 ```
 
-### Dashboard
-
 In order to use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with your local Kubernetes cluster, first use `kubectl` to create a proxy to the cluster:
 
 ```bash
@@ -143,9 +165,15 @@ Now you can access the web interface at [localhost:8001/ui](http://localhost:800
 
 Now that you've set up a Kubernetes cluster, either on [Google Kubernetes Engine](#pulsar-on-google-kubernetes-engine) or on a [custom cluster](#pulsar-on-a-custom-kubernetes-cluster), you can begin deploying the components that make up Pulsar. The YAML resource definitions for Pulsar components can be found in the `kubernetes` folder of the [Pulsar source package](pulsar:download_page_url).
 
-In that package, there are two sets of resource definitions, one for Google Kubernetes Engine (GKE) in the `deployment/kubernetes/google-kubernetes-engine` folder and one for a custom Kubernetes cluster in the `deployment/kubernetes/generic` folder. To begin, `cd` into the appropriate folder.
+In that package, there are different sets of resource definitions for different environments.
 
-### ZooKeeper
+- `deployment/kubernetes/google-kubernetes-engine`: for Google Kubernetes Engine (GKE)
+- `deployment/kubernetes/aws`: for AWS
+- `deployment/kubernetes/generic`: for a custom Kubernetes cluster
+
+To begin, `cd` into the appropriate folder.
+
+### Deploy ZooKeeper
 
 You *must* deploy ZooKeeper as the first Pulsar component, as it is a dependency for the others.
 
@@ -165,7 +193,7 @@ zk-2      0/1       Running            6          15m
 
 This step may take several minutes, as Kubernetes needs to download the Docker image on the VMs.
 
-#### Initialize cluster metadata
+### Initialize cluster metadata
 
 Once ZooKeeper is running, you need to [initialize the metadata](#cluster-metadata-initialization) for the Pulsar cluster in ZooKeeper. This includes system metadata for [BookKeeper](reference-terminology.md#bookkeeper) and Pulsar more broadly. There is a Kubernetes [job](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/)
 in the `cluster-metadata.yaml` file that you only need to run once:
 
@@ -177,22 +205,23 @@ For the sake of reference, that job runs the following command on an ephemeral p
 
 ```bash
 $ bin/pulsar initialize-cluster-metadata \
-  --cluster us-central \
+  --cluster local \
   --zookeeper zookeeper \
   --global-zookeeper zookeeper \
   --web-service-url http://broker.default.svc.cluster.local:8080/ \
   --broker-service-url pulsar://broker.default.svc.cluster.local:6650/
 ```
 
-#### Deploy the rest of the components
+### Deploy the rest of the components
 
 Once cluster metadata has been successfully initialized, you can then deploy the bookies, brokers, monitoring stack ([Prometheus](https://prometheus.io), [Grafana](https://grafana.com), and the [Pulsar dashboard](administration-dashboard.md)), and Pulsar cluster proxy:
 
 ```bash
 $ kubectl apply -f bookie.yaml
 $ kubectl apply -f broker.yaml
-$ kubectl apply -f monitoring.yaml
 $ kubectl apply -f proxy.yaml
+$ kubectl apply -f monitoring.yaml
+$ kubectl apply -f admin.yaml
 ```
 
 You can check on the status of the pods for these components either in the Kubernetes Dashboard or using `kubectl`:
@@ -201,7 +230,7 @@ You can check on the status of the pods for these components either in the Kuber
 $ kubectl get pods -w -l app=pulsar
 ```
 
-#### Set up properties and namespaces
+### Set up properties and namespaces
 
 Once all of the components are up and running, you'll need to create at least one Pulsar tenant and at least one namespace.
 
@@ -218,7 +247,7 @@ Now, any time you run `pulsar-admin`, you will be running commands from that pod
 ```bash
 $ pulsar-admin tenants create ten \
   --admin-roles admin \
-  --allowed-clusters us-central
+  --allowed-clusters local
 ```
 
 This command will create a `ns` namespace under the `ten` tenant:
@@ -231,15 +260,16 @@ To verify that everything has gone as planned:
 
 ```bash
 $ pulsar-admin tenants list
+public
 ten
 
 $ pulsar-admin namespaces list ten
-ns
+ten/ns
 ```
 
 Now that you have a namespace and tenant set up, you can move on to [experimenting with your Pulsar cluster](#experimenting-with-your-cluster) from within the cluster or [connecting to the cluster](#client-connections) using a Pulsar client.
 
-#### Experimenting with your cluster
+### Experimenting with your cluster
 
 Now that a tenant and namespace have been created, you can begin experimenting with your running Pulsar cluster. Using the same `pulsar-admin` pod via an alias, as in the section above, you can use [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) to create a test [producer](reference-terminology.md#producer) to publish 10,000 messages a second on a topic in the [tenant](reference-terminology.md#tenant) and [namespace](reference-terminology.md#namespace) you created.
 
@@ -273,6 +303,15 @@ $ pulsar-admin persistent stats persistent://public/default/my-topic
 
 The default monitoring stack for Pulsar on Kubernetes consists of [Prometheus](#prometheus), [Grafana](#grafana), and the [Pulsar dashboard](administration-dashboard.md).
 
+> If you deployed the cluster to Minikube, the following monitoring ports are mapped on the Minikube VM:
+>
+> - Prometheus port: 30003
+> - Grafana port: 30004
+> - Dashboard port: 30005
+>
+> You can use `minikube ip` to find the IP address of the Minikube VM, and then use the mapped ports
+> to access the corresponding services. For example, you can access the Pulsar dashboard at `http://$(minikube ip):30005`.
+
 #### Prometheus
 
 All Pulsar metrics in Kubernetes are collected by a [Prometheus](https://prometheus.io) instance running inside the cluster. Typically, there is no need to access Prometheus directly. Instead, you can use the [Grafana interface](#grafana) that displays the data stored in Prometheus.
@@ -305,6 +344,14 @@ You can then access the dashboard in your web browser at [localhost:8080](http:/
 
 ### Client connections
 
+> If you deployed the cluster to Minikube, the proxy ports are mapped on the Minikube VM:
+>
+> - HTTP port: 30001
+> - Pulsar binary protocol port: 30002
+>
+> You can use `minikube ip` to find the IP address of the Minikube VM, and then use the mapped ports
+> to access the corresponding services. For example, the Pulsar web service URL will be at `http://$(minikube ip):30001`.
+
 Once your Pulsar cluster is running on Kubernetes, you can connect to it using a Pulsar client. You can fetch the IP address for the Pulsar proxy running in your Kubernetes cluster using kubectl:
 
 ```bash
