Thanks for the pointers.
The details of the Kubernetes plan for DCAE are still work in progress. Here
are some of the highlights for service components.
1. We will support Kubernetes-based scaling and resilience mechanisms for
dockerized service components.
* This implies that the container(s) of a service component will be
packaged as a pod. Resilience is expected to be provided by Kubernetes.
* Kubernetes-based scaling support may require additional work from the
service component developer. A stateless service component, where each
instance behaves exactly the same as the next, is already “scaling-ready”:
load can be distributed to any instance and the result will be the same.
However, if the service component keeps state, multiple replicas may end up
with different local states unless this is handled carefully. The actual
mechanism for ensuring state synchronization is application dependent, but
one typical approach is to push “state” out to an external service such as a
DB, a persistent volume, or a distributed KV store, and load it into each
replica when needed (e.g. at startup), so that all replicas get their state
view from the same copy.
* For multiple replicas subscribing to the same Message Router topic, there
is a way to distribute the load: each replica uses the same “groupid” but a
different “userid”. Message Router considers a message received by a group
once it has been received by any user of that group, which prevents the same
message from being delivered to multiple replicas.
2. Our goal is to keep the interfaces through which a service component
interacts with the rest of DCAE the same, e.g. how your component gets
deployed, how it receives configuration updates, etc.
3. How the scaling trigger arrives and how the actual scaling (i.e. adding
replicas) is carried out are handled by external mechanisms. Service
components themselves do not need to worry about this.
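To make the state-externalization idea in point 1 concrete, here is a minimal
sketch in Python. The names (StatefulComponent, DictKV) are purely
illustrative and not part of any DCAE API; the in-memory DictKV stands in for
a real external store such as a DB, Consul, or Redis.

```python
import json

class StatefulComponent:
    """Sketch of a replica that keeps its state in an external store."""

    def __init__(self, kv, key="holmes/state"):
        self.kv = kv    # client for the external key-value store
        self.key = key
        # On startup, every replica loads the same shared copy of the state,
        # so all replicas begin with an identical state view.
        raw = kv.get(self.key)
        self.state = json.loads(raw) if raw else {}

    def update(self, k, v):
        # Push the change out immediately so other replicas (or a newly
        # started one) can load the current state from the same copy.
        self.state[k] = v
        self.kv.put(self.key, json.dumps(self.state))

class DictKV:
    """In-memory stand-in for an external KV store (illustration only)."""
    def __init__(self):
        self.d = {}
    def get(self, k):
        return self.d.get(k)
    def put(self, k, v):
        self.d[k] = v
```

A second replica constructed against the same store starts with the state the
first replica pushed, which is exactly the “same copy” property described
above.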
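The “groupid”/“userid” load-distribution pattern can also be sketched. In the
DMaaP Message Router REST API, consumption URLs take the form
/events/{topic}/{consumerGroup}/{consumerId}; the host name, port, and topic
below are placeholder assumptions, not values from this thread.

```python
# Sketch: DMaaP Message Router consumption URLs for load-sharing replicas.
# All replicas use the same consumer group ("groupid") but distinct consumer
# IDs ("userid"), so MR delivers each message to only one replica per group.
def mr_consume_url(host: str, topic: str, group_id: str, consumer_id: str) -> str:
    # 3905 is commonly the MR HTTPS port (3904 for HTTP); adjust as needed.
    return f"https://{host}:3905/events/{topic}/{group_id}/{consumer_id}"

# Three replicas sharing one consumer group (names are illustrative):
urls = [
    mr_consume_url("message-router", "unauthenticated.SEC_FAULT_OUTPUT",
                   "holmes-group", f"replica-{i}")
    for i in range(3)
]
```

Each replica polls its own URL; because the group id is shared, the topic's
messages are split across the replicas rather than duplicated to each.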
We hope to have more details to share next week, and to set up a focus meeting.
From: Roger Maitland <roger.maitl...@amdocs.com>
Date: Friday, February 2, 2018 at 1:48 PM
To: "fu.guangr...@zte.com.cn" <fu.guangr...@zte.com.cn>, "JI, LUSHENG
Cc: "email@example.com" <firstname.lastname@example.org>,
Subject: RE: [onap-discuss] [dcae][dcaegen2][holmes] A Question on Auto
Scailing of DCAE Microservices
I don’t have an answer in the context of the DCAE controller, but
OOM/Kubernetes has facilities to help build a Holmes cluster in the
containerized version of DCAE (which is being worked on). The cluster can be
static (which I believe is what most projects intend for Beijing) or dynamic
(the OOM team would love to work with you on this). Here are some links I
hope you find useful:
* OOM Scaling:
* K8s auto-scaling:
Here is a sample of how an auto-scaler is configured:
- type: Resource
- type: Resource
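The two “type: Resource” lines above appear to be the surviving fragments of a
HorizontalPodAutoscaler metrics list. A minimal sketch of what such a spec
typically looks like follows; the target name, replica counts, and thresholds
are illustrative, and the API version (autoscaling/v2 here) depends on the
cluster:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: holmes-engine          # illustrative component name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: holmes-engine        # the Deployment being scaled
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource             # scale on average CPU utilization
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource             # and on average memory usage
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 500Mi
```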
The OOM team would be happy to work with you on implementing this.
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of
Sent: Friday, February 2, 2018 2:58 AM
Cc: email@example.com; tang.pe...@zte.com.cn
Subject: [onap-discuss] [dcae][dcaegen2][holmes] A Question on Auto Scailing of
The Holmes team is currently working on the auto scaling plans for Holmes. We
need to confirm something with you.
To my understanding, the microservice should only focus on how to maintain
and balance its data flow, rather than on how the Docker containers/VMs are
scaled by their controller. As a DCAE application, I think it is the DCAE
controller's responsibility to determine when and how to scale Holmes
instances in or out. Is that correct?
If my understanding is correct, does DCAE have any specific requirements
regarding collecting the status and metrics of its microservices?