This is an automated email from the ASF dual-hosted git repository.

ywkim pushed a commit to branch cnb
in repository https://gitbox.apache.org/repos/asf/bigtop.git


The following commit(s) were added to refs/heads/cnb by this push:
     new 2c15c08  BIGTOP-3262: Use default storageClass
2c15c08 is described below

commit 2c15c082986f4380bcaab5238aa85d66906d27ee
Author: Youngwoo Kim <yw...@apache.org>
AuthorDate: Thu Oct 24 12:05:53 2019 +0900

    BIGTOP-3262: Use default storageClass
---
 README.md                     | 42 +++++++++++++++++++++++++++++++++++++++---
 README_STORAGE.md             | 14 --------------
 kafka/values.yaml             |  1 -
 kubectl/plugin/kubectl-bigtop | 10 ++++++++++
 prometheus/values.yaml        | 14 +++++++-------
 zookeeper/values.yaml         |  2 +-
 6 files changed, 57 insertions(+), 26 deletions(-)

diff --git a/README.md b/README.md
index b89e7e4..a865f6f 100755
--- a/README.md
+++ b/README.md
@@ -96,11 +96,22 @@ For example, the stable helm charts don't properly configure zepplin, allow for
 
 # Immediately Get Started with Deployment and Smoke Testing of Cloud Native BigTop
 
+Minikube is the easiest way to run a single-node Kubernetes cluster.
+
+```
+$ cd $BIGTOP_HOME
+$ minikube start --cpus 8 --memory 8196 --container-runtime=cri-o
+$ kubectl cluster-info
+```
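+
+As a quick sanity check (optional; these are standard kubectl commands, nothing Bigtop-specific), confirm the node is Ready and the system pods are running:
+```
+$ kubectl get nodes
+$ kubectl get pods -n kube-system
+```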
+
+## Set up 3-Node Kubernetes cluster via Kubespray on local machine
+
 Prerequisites:
 - Vagrant
 - Java
 
-## Set up 3-Node Kubernetes cluster via Kubespray on local machine
+If you want a multi-node cluster on your local machine, you can create one using Kubespray:
 ```
 $ cd $BIGTOP_HOME
 $ ./gradlew kubespray-clean kubespray-download && cd dl/ && tar xvfz kubespray-2.11.0.tar.gz
@@ -120,6 +131,33 @@ k8s-1$ kubectl bigtop kubectl-config && kubectl bigtop helm-deploy
 ```
 
 ## Storage
+
+The easiest way to get simple persistent volumes with dynamic volume provisioning on a single-node cluster is the hostpath provisioner:
+```
+$ helm repo add rimusz https://charts.rimusz.net
+$ helm repo update
+$ helm upgrade --install hostpath-provisioner \
+    --namespace kube-system \
+    --set storageClass.defaultClass=true \
+    rimusz/hostpath-provisioner
+
+$ kubectl get storageclass
+```
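+
+To confirm that dynamic provisioning works, you can create a small test claim (the claim name below is an arbitrary example):
+```
+$ cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: test-pvc
+spec:
+  accessModes: ["ReadWriteOnce"]
+  resources:
+    requests:
+      storage: 1Gi
+EOF
+$ kubectl get pvc test-pvc
+$ kubectl delete pvc test-pvc
+```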
+
+Mark the ```hostpath``` StorageClass as the default:
+```
+$ kubectl patch storageclass hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
+$ kubectl get storageclass
+```
+
+On Minikube, the 'standard' StorageClass is the default. To make 'hostpath' the default instead, remove the default annotation from 'standard':
+```
+$ kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
+$ kubectl get storageclass
+```
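+
+Only one StorageClass should be marked default at a time. A jsonpath one-liner (a sketch; adjust to taste) to check the annotation on every class:
+```
+$ kubectl get storageclass -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}{end}'
+```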
+
+### Rook
+
 You need to install the ```lvm2``` package for Rook-Ceph:
 ```
 # Centos
@@ -130,8 +168,6 @@ sudo apt-get install -y lvm2
 ```
 Refer to https://rook.io/docs/rook/v1.1/k8s-pre-reqs.html for prerequisites on Rook
 
-### Rook Ceph
-
 Run ```download``` task to get Rook binary:
 ```
 $ ./gradlew rook-clean rook-download && cd dl/ && tar xvfz rook-1.1.2.tar.gz
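+# A typical next step (a sketch based on the Rook v1.1 examples; paths assume the
+# extracted rook-1.1.2 tarball -- verify against the Rook v1.1 docs linked above):
+$ cd rook-1.1.2/cluster/examples/kubernetes/ceph
+$ kubectl create -f common.yaml -f operator.yaml
+$ kubectl create -f cluster.yaml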
diff --git a/README_STORAGE.md b/README_STORAGE.md
deleted file mode 100755
index 431261a..0000000
--- a/README_STORAGE.md
+++ /dev/null
@@ -1,14 +0,0 @@
-This inlcudes the various storage recipes curated for
-use in a bigdata distro that would run on a cloud native platform.
-
-
-
-- Minio: Global object store to support spark/kafka/etc
-  You can install it from the yamls in this repo, or else,
-  `helm install --name minio stable/minio --namespace=bigdata` directly.
-
-- Hbase: For use by tools like PredictionIO.
-  For installation,
-   - git clone https://github.com/warp-poke/hbase-helm
-   - cd to hbase-helm
-   - modify configmap to use nifi-zookeeper as the zk.quorum field.
diff --git a/kafka/values.yaml b/kafka/values.yaml
index bd4b789..4d2e3e6 100644
--- a/kafka/values.yaml
+++ b/kafka/values.yaml
@@ -231,7 +231,6 @@ persistence:
   ##   GKE, AWS & OpenStack)
   ##
   # storageClass:
-  storageClass: "rook-ceph-block"
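+  # storageClass intentionally left unset: PVCs fall back to the cluster's default StorageClass.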
 
 jmx:
 ## Rules to apply to the Prometheus JMX Exporter.  Note while lots of stats have been cleaned and exposed,
diff --git a/kubectl/plugin/kubectl-bigtop b/kubectl/plugin/kubectl-bigtop
index a59beb3..e2c921e 100755
--- a/kubectl/plugin/kubectl-bigtop
+++ b/kubectl/plugin/kubectl-bigtop
@@ -62,6 +62,16 @@ if [[ "$1" == "helm-deploy" ]]; then
     exit 0
 fi
 
+# Hostpath StorageClass
+if [[ "$1" == "hostpath-deploy" ]]; then
+    helm repo add rimusz https://charts.rimusz.net
+    helm repo update
+    helm upgrade --install hostpath-provisioner --namespace kube-system --set storageClass.defaultClass=true rimusz/hostpath-provisioner
+    sleep 5s ; kubectl get storageclass
+
+    exit 0
+fi
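+
+# Example usage (assuming the plugin file is on PATH as kubectl-bigtop):
+#   kubectl bigtop hostpath-deploy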
+
 # Install Rook-Ceph
 if [[ "$1" == "rook-ceph-deploy" ]]; then
 
diff --git a/prometheus/values.yaml b/prometheus/values.yaml
index a5cf728..d56f725 100644
--- a/prometheus/values.yaml
+++ b/prometheus/values.yaml
@@ -1571,14 +1571,14 @@ prometheus:
     ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/storage.md
     ##
-    storageSpec: {}
+    storageSpec:
-    #  volumeClaimTemplate:
-    #    spec:
+      volumeClaimTemplate:
+        spec:
     #      storageClassName: gluster
-    #      accessModes: ["ReadWriteOnce"]
-    #      resources:
-    #        requests:
-    #          storage: 50Gi
-    #    selector: {}
+          accessModes: ["ReadWriteOnce"]
+          resources:
+            requests:
+              storage: 5Gi
+        selector: {}
 
     ## AdditionalScrapeConfigs allows specifying additional Prometheus scrape configurations. Scrape configurations
     ## are appended to the configurations generated by the Prometheus Operator. Job configurations must have the form
diff --git a/zookeeper/values.yaml b/zookeeper/values.yaml
index bf0e999..1c92dcb 100644
--- a/zookeeper/values.yaml
+++ b/zookeeper/values.yaml
@@ -121,7 +121,7 @@ persistence:
   ##   GKE, AWS & OpenStack)
   ##
   # storageClass: "-"
-  storageClass: "rook-ceph-block"
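+  # storageClass intentionally left unset so the chart uses the cluster's default StorageClass.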
   accessMode: ReadWriteOnce
   size: 5Gi
 
