daisy-ycguo closed pull request #390: improve documentation of configurations choices
URL: https://github.com/apache/incubator-openwhisk-deploy-kube/pull/390
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/README.md b/README.md
index b8907e5..40014e8 100644
--- a/README.md
+++ b/README.md
@@ -52,8 +52,7 @@ containerized applications. [Helm](https://helm.sh/) is a package
 manager for Kubernetes that simplifies the management of Kubernetes
 applications. You do not need to have detailed knowledge of either Kubernetes or
 Helm to use this project, but you may find it useful to review their
-basic documentation at the links above to become familiar with
-their key concepts and terminology.
+basic documentation to become familiar with their key concepts and terminology.
 
 ## Kubernetes
 
@@ -78,10 +77,9 @@ your cluster.
 [setup instructions](docs/k8s-dind-cluster.md) because the default
 setup of kubeadm-dind-cluster does *not* meet the requirements for
 running OpenWhisk.
-3. Windows: We believe that just like with MacOS, the built-in
-Kubernetes support in Docker for Windows version 18.06 or later should
-be sufficient to run OpenWhisk.  We would welcome a pull request with
-provide detailed setup instructions for Windows.
+3. Windows: You should be able to use the built-in Kubernetes support
+in Docker for Windows version 18.06 or later.
+We would welcome a pull request with detailed setup instructions for Windows.
 
 ### Using Minikube
 
@@ -98,7 +96,7 @@ subject to the cluster meeting the [technical
 requirements](docs/k8s-technical-requirements.md).  We have
 detailed documentation on using Kubernetes clusters from the following
 major cloud providers:
-* [IBM (IKS)](docs/k8s-ibm-public.md)
+* [IBM (IKS)](docs/k8s-ibm-public.md) and [IBM (ICP)](docs/k8s-ibm-private.md)
 * [Google (GKE)](docs/k8s-google.md)
 * [Amazon (EKS)](docs/k8s-aws.md)
 
@@ -112,7 +110,7 @@ consists of the `helm` command line tool that you install on your
 development machine and the `tiller` runtime that is deployed on your
 Kubernetes cluster.
 
-For detailed instructions on installing Helm, see these [instructions](docs/helm.md).
+For details on installing Helm, see these [instructions](docs/helm.md).
 
 In short if you already have the `helm` cli installed on your development machine,
 you will need to execute these two commands and wait a few seconds for the
@@ -151,7 +149,7 @@ easily access services in a Kubernetes-native way, you can configure
 your OpenWhisk deployment to enable that by either using the
 [KubernetesContainerFactory](docs/configurationChoices.md#invoker-container-factory)
 or setting the value of `invoker.DNS` when you create the `mycluster.yaml`
-to customize your deployment.
+to customize your deployment ([see DNS options](docs/configurationChoices.md#user-action-container-dns)).
 
 ## Initial setup
 
@@ -164,7 +162,7 @@ scheduler. For a single node cluster, simply do
 ```shell
 kubectl label nodes --all openwhisk-role=invoker
 ```
-If you have a multi-node cluster, for each node <INVOKER_NODE_NAME>
+If you have a multi-node cluster, then for each node <INVOKER_NODE_NAME>
 you want to be an invoker, execute
 ```shell
 $ kubectl label nodes <INVOKER_NODE_NAME> openwhisk-role=invoker
diff --git a/docs/configurationChoices.md b/docs/configurationChoices.md
index ad1a448..9a49947 100644
--- a/docs/configurationChoices.md
+++ b/docs/configurationChoices.md
@@ -226,9 +226,9 @@ component on Kubernetes (selected by picking a
       create, schedule, and manage the Pods that contain the user function
       containers. The pros and cons of this design are roughly the
       inverse of `DockerContainerFactory`.  Kubernetes pod management
-      operations have higher latency and exercise newer code paths in
-      the Invoker.  However, this design fully leverages Kubernetes to
-      manage the execution resources for user functions.
+      operations have higher latency and without additional configuration
+      (see below) can result in poor performance. However, this design
+      fully leverages Kubernetes to manage the execution resources for user functions.
 
 You can control the selection of the ContainerFactory by adding either
 ```yaml
@@ -244,9 +244,29 @@ invoker:
 ```
 to your `mycluster.yaml`
 
+For scalability, you will probably want to use `replicaCount` to
+deploy more than one Invoker when using the KubernetesContainerFactory.
+You will also need to override the value of `whisk.containerPool.userMemory`
+to a significantly larger value, so that it better matches the overall
+memory available on the invoker worker nodes divided by the number of
+Invokers you are creating.
+
+When using the KubernetesContainerFactory, the invoker uses the Kubernetes
+API server to extract logs from the user action containers. This operation
+has high overhead and, if user actions produce non-trivial amounts of logging
+output, can result in severe performance degradation. To mitigate this, you
+should configure an alternate implementation of the LogStoreProvider SPI.
+For example, you can completely disable OpenWhisk's log processing and rely
+on Kubernetes-level logs of the action containers by adding the following
+to your `mycluster.yaml`:
+```yaml
+invoker:
+  options: "-Dwhisk.spi.LogStoreProvider=org.apache.openwhisk.core.containerpool.logging.LogDriverLogStoreProvider"
+```
+
 The KubernetesContainerFactory can be deployed with an additional
 invokerAgent that implements container suspend/resume operations on
-behalf of a remote Invoker.  To enable this, add
+behalf of a remote Invoker. To enable this experimental configuration, add
 ```yaml
 invoker:
   containerFactory:
@@ -256,9 +276,20 @@ invoker:
 ```
 to your `mycluster.yaml`
 
-For scalability, you will probably want to use `replicaCount` to
-deploy more than one Invoker when using the KubernetesContainerFactory.
-You will also need to override the value of `whisk.containerPool.userMemory`
-to a significantly larger value when using the KubernetesContainerFactory
-to better match the overall memory available on invoker worker nodes divided by
-the number of Invokers you are creating.
+### User action container DNS
+
+If you are using the DockerContainerFactory, by default your user actions will
+not be able to connect to other Kubernetes services running in your cluster.
+To enable a more Kubernetes-native variant of the DockerContainerFactory, you
+need to configure the DNS nameservers for the user containers to use Kubernetes's
+DNS service. Currently this requires you to discover the ClusterIP
+used for the DNS service and record this numeric IP address in `mycluster.yaml`.
+
+For example, if your cluster uses kube-dns, then first
+get the IP address of the Kubernetes DNS server with `echo $(kubectl get svc kube-dns -n kube-system -o 'jsonpath={.spec.clusterIP}')`
+and then add the following stanza to your `mycluster.yaml`:
+```yaml
+invoker:
+  containerFactory:
+    nameservers: "<IP_Address_Of_Kube_DNS>"
+```
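
The two manual steps in the new DNS section (looking up the DNS service address and editing `mycluster.yaml`) can also be scripted. The sketch below is illustrative only and assumes the cluster DNS service is named `kube-dns` in the `kube-system` namespace, as in the example above; adjust the service name to match your cluster.

```shell
# Capture the cluster DNS service address (assumes kube-dns in kube-system)
KUBE_DNS_IP=$(kubectl get svc kube-dns -n kube-system -o 'jsonpath={.spec.clusterIP}')

# Append the stanza shown in the diff above to mycluster.yaml
cat >> mycluster.yaml <<EOF
invoker:
  containerFactory:
    nameservers: "${KUBE_DNS_IP}"
EOF
```

If `mycluster.yaml` already contains an `invoker:` stanza, merge the `nameservers` key into it by hand instead of appending a second stanza.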


 
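For readers who want the KubernetesContainerFactory guidance above in one place, here is an illustrative `mycluster.yaml` fragment. The names `whisk.containerPool.userMemory`, `replicaCount`, and `invoker.options` come from the diff; the nesting of `replicaCount` under `invoker`, the `impl: "kubernetes"` selector, and the concrete memory and replica values are assumptions for illustration, so consult docs/configurationChoices.md in the repository for the authoritative keys and defaults.

```yaml
# Illustrative values only. Size userMemory to roughly the memory available on
# the invoker worker nodes divided by the number of Invoker replicas.
whisk:
  containerPool:
    userMemory: "4096m"
invoker:
  # Assumed nesting: more than one Invoker for scalability.
  replicaCount: 2
  containerFactory:
    # Assumed key for selecting the KubernetesContainerFactory.
    impl: "kubernetes"
  # Disable OpenWhisk-level log processing and rely on Kubernetes container logs.
  options: "-Dwhisk.spi.LogStoreProvider=org.apache.openwhisk.core.containerpool.logging.LogDriverLogStoreProvider"
```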

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
