ptlrs commented on code in PR #10:
URL: https://github.com/apache/ozone-helm-charts/pull/10#discussion_r1832141122


##########
charts/ozone/templates/scm/scm-statefulset.yaml:
##########
@@ -31,6 +31,7 @@ metadata:
     app.kubernetes.io/component: scm
 spec:
   replicas: {{ .Values.scm.replicas }}
+  podManagementPolicy: Parallel

Review Comment:
   Is a `Parallel` policy disruptive when we `kubectl scale` up or down?
   It would be interesting to see whether the Ratis rings are disrupted. Perhaps a
`PreStop` hook for graceful shutdown of OM/SCM/Datanodes is required; this could
be tracked in a new Jira.
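
   A rough sketch of what such a hook could look like (the `ozone --daemon stop`
invocation and the grace period are assumptions to be verified, not a confirmed
graceful-shutdown procedure for a Ratis member):

   ```yaml
          lifecycle:
            preStop:
              exec:
                # Hypothetical: ask the daemon to stop cleanly before the pod
                # receives SIGKILL; the exact drain/stop command for leaving a
                # Ratis ring gracefully would need to be confirmed.
                command: ["/bin/sh", "-c", "ozone --daemon stop scm"]
   ```

   The StatefulSet's `terminationGracePeriodSeconds` would also need to be large
enough for the hook to finish.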



##########
charts/ozone/templates/om/om-service-headless.yaml:
##########
@@ -28,6 +28,10 @@ spec:
   ports:
     - name: ui
       port: {{ .Values.om.service.port }}
+    {{- if gt (int .Values.om.replicas) 1 }}
+    - name: ratis
+      port: 9872
+    {{- end }}

Review Comment:
   Is this port being exposed so that one Ozone Manager (OM) can communicate with
another OM when they form a Ratis ring?
   My understanding is that the current `selector` matches this service to
all OM pods. Consequently, messages sent to this service will be forwarded
to a random OM pod rather than a specific OM pod.
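
   For reference, a headless service does give each StatefulSet pod a stable
per-pod DNS name, so Ratis peers could be configured against specific pods
rather than the service name (release name and namespace below are
illustrative):

   ```yaml
   # Each OM pod is individually addressable through the headless service, e.g.:
   #
   #   ozone-om-0.ozone-om-headless.default.svc.cluster.local:9872
   #   ozone-om-1.ozone-om-headless.default.svc.cluster.local:9872
   #
   # whereas resolving the bare service name returns all matching pod IPs.
   ```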



##########
charts/ozone/templates/scm/scm-statefulset.yaml:
##########
@@ -61,6 +62,24 @@ spec:
               mountPath: {{ .Values.configuration.dir }}
             - name: {{ .Release.Name }}-scm
               mountPath: {{ .Values.scm.persistence.path }}
+        {{- if gt (int .Values.scm.replicas) 1 }}
+        - name: bootstrap
+          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | 
default .Chart.AppVersion }}"
+          args: ["ozone", "scm", "--bootstrap"]

Review Comment:
   It appears from the documentation that `init` on the first pod needs to complete
before `bootstrap` runs on the remaining pods. This would perhaps require changing
`podManagementPolicy: Parallel` to `OrderedReady`.



##########
charts/ozone/templates/om/om-service-headless.yaml:
##########
@@ -28,6 +28,10 @@ spec:
   ports:
     - name: ui
       port: {{ .Values.om.service.port }}
+    {{- if gt (int .Values.om.replicas) 1 }}
+    - name: ratis
+      port: 9872
+    {{- end }}

Review Comment:
   If the number of OM pods is modified manually with `kubectl scale`, the chart
templates are not re-rendered, so this port may never be exposed. We should
consider whether there is any downside to always exposing it.



##########
charts/ozone/templates/scm/scm-statefulset.yaml:
##########
@@ -61,6 +62,24 @@ spec:
               mountPath: {{ .Values.configuration.dir }}
             - name: {{ .Release.Name }}-scm
               mountPath: {{ .Values.scm.persistence.path }}
+        {{- if gt (int .Values.scm.replicas) 1 }}
+        - name: bootstrap
+          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | 
default .Chart.AppVersion }}"
+          args: ["ozone", "scm", "--bootstrap"]

Review Comment:
   From the [SCM HA](https://ozone.apache.org/docs/current/feature/scm-ha.html) 
docs:
   >The initialization of the first SCM-HA node is the same as a non-HA SCM:
   >`ozone scm --init`
   >Second and third nodes should be bootstrapped instead of init
   >`ozone scm --bootstrap`
   
   Here, we call `init` and then `bootstrap` for every SCM pod (re)start.
   We would instead have to perform pod-ordinal-specific actions.
   
   It is not clear from the documentation whether `init` and `bootstrap` should
be performed only once during the lifetime of the pod or on every pod
restart.
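
   A sketch of how pod-ordinal-specific behaviour could look, assuming the
StatefulSet hostname convention (`<name>-<ordinal>`); whether these commands
are safe to re-run on every restart still needs to be verified:

   ```yaml
      initContainers:
        - name: scm-init
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          command: ["/bin/sh", "-c"]
          args:
            - |
              # StatefulSet pods are named <statefulset>-<ordinal>;
              # take everything after the last dash as the ordinal.
              ORDINAL="${HOSTNAME##*-}"
              if [ "$ORDINAL" = "0" ]; then
                ozone scm --init       # first node: plain init
              else
                ozone scm --bootstrap  # remaining nodes: join the ring
              fi
   ```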



##########
charts/ozone/templates/om/om-statefulset.yaml:
##########
@@ -31,6 +31,7 @@ metadata:
     app.kubernetes.io/component: om
 spec:
   replicas: {{ .Values.om.replicas }}
+  podManagementPolicy: Parallel

Review Comment:
   As per [OM 
documentation](https://ozone.apache.org/docs/current/feature/om-ha.html)
   >To convert a non-HA OM to be HA or to add new OM nodes to existing HA OM 
ring, new OM node(s) need to be bootstrapped.
   
   Shouldn't there be an init container that calls `--bootstrap` for OM?
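
   If so, something along these lines might work (`ozone om --bootstrap` follows
the OM HA docs, but its idempotency across pod restarts is unverified):

   ```yaml
      initContainers:
        - name: om-bootstrap
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          command: ["/bin/sh", "-c"]
          args:
            - |
              # Only pods other than the first need to bootstrap into an
              # existing OM ring; ordinal 0 is initialized normally.
              if [ "${HOSTNAME##*-}" != "0" ]; then
                ozone om --bootstrap
              fi
   ```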



##########
charts/ozone/templates/om/om-service-headless.yaml:
##########
@@ -28,6 +28,10 @@ spec:
   ports:
     - name: ui
       port: {{ .Values.om.service.port }}
+    {{- if gt (int .Values.om.replicas) 1 }}
+    - name: ratis
+      port: 9872
+    {{- end }}

Review Comment:
   As per [OM 
documentation](https://ozone.apache.org/docs/current/feature/om-ha.html)
   >This logical name is called serviceId and can be configured in the 
ozone-site.xml
   >The defined serviceId can be used instead of a single OM host using [client 
interfaces](https://ozone.apache.org/docs/current/interface.html)
   
   Perhaps there should be a separate service that maps and groups pods
according to the `serviceId`.



##########
charts/ozone/templates/scm/scm-statefulset.yaml:
##########
@@ -82,10 +101,18 @@ spec:
           ports:
             - name: rpc-client
               containerPort: 9860
+            - name: block-client
+              containerPort: 9863
             - name: rpc-datanode
               containerPort: 9861
             - name: ui
               containerPort: {{ .Values.scm.service.port }}
+            {{- if gt (int .Values.scm.replicas) 1 }}
+            - name: ratis
+              containerPort: 9894
+            - name: grpc
+              containerPort: 9895
+            {{- end }}

Review Comment:
   Since we are exposing these ports, we should also add a readiness probe to
gate access to them; this could be tracked in a separate Jira.
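
   For illustration, a probe could be as simple as checking that the RPC port
accepts connections (port choice and timings below are only placeholders):

   ```yaml
          readinessProbe:
            # Hypothetical: mark the pod ready only once the client RPC
            # port is listening; this keeps the pod out of service
            # endpoints until SCM is actually up.
            tcpSocket:
              port: rpc-client
            initialDelaySeconds: 15
            periodSeconds: 10
   ```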



##########
charts/ozone/templates/datanode/datanode-service-headless.yaml:
##########
@@ -28,6 +28,10 @@ spec:
   ports:
     - name: ui
       port: {{ .Values.datanode.service.port }}
+    - name: ratis-ipc
+      port: 9858
+    - name: ipc
+      port: 9859

Review Comment:
   Can the port numbers in all the services and statefulsets be referenced from
the `values.yaml` file? We would like to avoid hardcoding them.
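
   For example, the values file could carry a `ports` map per component (key
names below are hypothetical) that the templates then reference:

   ```yaml
   # values.yaml (hypothetical layout)
   scm:
     ports:
       rpcClient: 9860
       rpcDatanode: 9861
       blockClient: 9863
       ratis: 9894
       grpc: 9895

   # scm-statefulset.yaml would then render:
   #   - name: ratis
   #     containerPort: {{ .Values.scm.ports.ratis }}
   ```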



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

