jiangzho commented on code in PR #2:
URL: https://github.com/apache/spark-kubernetes-operator/pull/2#discussion_r1552639165
########## spark-operator-docs/operations.md: ##########
@@ -0,0 +1,122 @@

<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->

## Manage Your Spark Operator

The operator installation is managed by a Helm chart. To install, run:

```
helm install spark-kubernetes-operator \
  -f build-tools/helm/spark-kubernetes-operator/values.yaml \
  build-tools/helm/spark-kubernetes-operator/
```

Alternatively, to install the operator (and the Helm chart) into a specific namespace:

```
helm install spark-kubernetes-operator \
  -f build-tools/helm/spark-kubernetes-operator/values.yaml \
  build-tools/helm/spark-kubernetes-operator/ \
  --namespace spark-system --create-namespace
```

Note that in this case you will need to update the namespace in the examples accordingly.

### Spark Application Namespaces

By default, Spark applications are created in the same namespace as the operator deployment.
You may also configure the chart deployment to add the RBAC resources that applications
need in order to run in additional namespaces.

## Overriding configuration parameters during Helm install

Helm provides several ways to override the default installation parameters (contained
in `values.yaml`) for the Helm chart.

To override individual parameters, use `--set`, for example:

```
helm install --set image.repository=<my_registry>/spark-kubernetes-operator \
  -f build-tools/helm/spark-kubernetes-operator/values.yaml \
  build-tools/helm/spark-kubernetes-operator/
```

You can also provide multiple custom values files with the `-f` flag; the last file takes
precedence:

```
helm install spark-kubernetes-operator \
  -f build-tools/helm/spark-kubernetes-operator/values.yaml \
  -f my_values.yaml \
  build-tools/helm/spark-kubernetes-operator/
```
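For illustration, a custom values file such as `my_values.yaml` might look like the
following. This is a minimal sketch: the chosen overrides are hypothetical, and the
nesting assumes the usual Helm convention of mapping the dotted keys from the table
below onto nested YAML.

```
# my_values.yaml -- hypothetical overrides, layered on top of the chart
# defaults via the second -f flag above.
image:
  repository: registry.example.com/spark-kubernetes-operator  # hypothetical registry
  tag: latest
  pullPolicy: IfNotPresent

operatorDeployment:
  operatorContainer:
    # Resource requests for the operator container.
    resources:
      requests:
        cpu: "1"
        memory: 1Gi
```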
The configurable parameters of the Helm chart are detailed in the following table:

| Parameters | Description |
|------------|-------------|
| image.repository | The image repository of spark-kubernetes-operator. |
| image.pullPolicy | The image pull policy of spark-kubernetes-operator. |
| image.tag | The image tag of spark-kubernetes-operator. |
| image.digest | The image digest of spark-kubernetes-operator. If set, it takes precedence and the image tag is ignored. |
| imagePullSecrets | The image pull secrets of spark-kubernetes-operator. |
| operatorDeployment.replica | Operator replica count. Must be 1 unless leader election is configured. |
| operatorDeployment.strategy.type | Operator pod upgrade strategy. Must be Recreate unless leader election is configured. |
| operatorDeployment.operatorPod.annotations | Custom annotations to be added to the operator pod. |
| operatorDeployment.operatorPod.labels | Custom labels to be added to the operator pod. |
| operatorDeployment.operatorPod.nodeSelector | Custom nodeSelector to be added to the operator pod. |
| operatorDeployment.operatorPod.topologySpreadConstraints | Custom topologySpreadConstraints to be added to the operator pod. |
| operatorDeployment.operatorPod.dnsConfig | DNS configuration to be used by the operator pod. |
| operatorDeployment.operatorPod.volumes | Additional volumes to be added to the operator pod. |
| operatorDeployment.operatorPod.priorityClassName | Priority class name to be used for the operator pod. |
| operatorDeployment.operatorPod.securityContext | Security context overrides for the operator pod. |
| operatorDeployment.operatorContainer.jvmArgs | JVM argument overrides for the operator container. |
| operatorDeployment.operatorContainer.env | Custom env to be added to the operator container. |
| operatorDeployment.operatorContainer.envFrom | Custom envFrom to be added to the operator container, e.g. for the downward API. |
| operatorDeployment.operatorContainer.probes | Probe config for the operator container. |
| operatorDeployment.operatorContainer.securityContext | Security context overrides for the operator container. |
| operatorDeployment.operatorContainer.resources | Resources for the operator container. |
| operatorDeployment.additionalContainers | Additional containers to be added to the operator pod, e.g. sidecars. |
| operatorRbac.serviceAccount.create | Whether to create a service account for the operator to use. |
| operatorRbac.clusterRole.create | Whether to create a ClusterRole for the operator to use. If disabled, a Role is created in the operator and app namespaces instead. |
| operatorRbac.clusterRoleBinding.create | Whether to create a ClusterRoleBinding for the operator to use. If disabled, a RoleBinding is created in the operator and app namespaces instead. |
| operatorRbac.clusterRole.configManagement.roleName | Role name for operator configuration management (hot property loading and leader election). |
| appResources.namespaces.create | Whether to create dedicated namespaces for Spark apps. |
| appResources.namespaces.watchGivenNamespacesOnly | When enabled, the operator by default watches only the namespace(s) provided in the data field. |
| appResources.namespaces.data | List of namespaces to create for apps. |

Review Comment:
   It's more like the former: when this is set, the mentioned namespaces would be created. Also, the RBAC resources required by Spark apps (service accounts, roles, rolebindings, etc.) would be created. And when `appResources.namespaces.watchGivenNamespacesOnly` is set to true, the operator would only watch these namespaces. Technically, the operator can watch at the cluster level, as long as the other namespaces are created and the required RBAC is created separately.
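To illustrate the behavior described in the comment, here is a minimal sketch of the
relevant values; the namespace names are hypothetical, and the nesting again assumes
the dotted keys from the table map onto nested YAML:

```
# Hypothetical appResources snippet: the listed namespaces are created,
# along with the RBAC resources Spark apps need (service accounts, roles,
# rolebindings, etc.).
appResources:
  namespaces:
    create: true
    # Watch only the namespaces listed in data, rather than the whole cluster.
    watchGivenNamespacesOnly: true
    data:
      - spark-team-a   # hypothetical namespace
      - spark-team-b   # hypothetical namespace
```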
