Thanks Roger and Lusheng for your kind feedback.



I'll look through the links that Roger pointed to. Since Holmes does not have to 
do anything about metric collection, I think the top priority for our team is to 
handle the state of our containers properly after auto scaling.




Lusheng,




As you know, we have a virtual F2F event this week, so I'm not sure whether we 
will have a chance to discuss this. Please do let me know when you are ready to 
share.




Thank you very much.




Regards,

Guangrong

Original Mail



From: <l...@research.att.com>;
To: <roger.maitl...@amdocs.com>; Fu Guangrong 10144542;
Cc: <onap-discuss@lists.onap.org>; Tang Peng 10114589;
Date: 2018-02-03 05:13
Subject: Re: [onap-discuss] [dcae][dcaegen2][holmes] A Question on Auto Scaling of 
DCAE Microservices




Roger,


 


Thanks for the pointers.


 


Guangrong,


 


The details of the Kubernetes plan for DCAE are still work in progress.  Here 
are some of the highlights for service components.

DCAE will support Kubernetes-based scaling and resilience mechanisms for dockerized 
service components.

This implies that the container(s) of a service component will be packaged as a 
pod.  Resilience is expected to be provided by the Kubernetes cluster.

Kubernetes-based scaling may need additional support from the service component 
developer. For example, a service component that is stateless, i.e. each instance 
behaves exactly the same as the next, is already “scaling-ready”: load can be 
distributed to any instance and the result would be the same.  However, if the 
service component keeps state, multiple replicas of it may end up with different 
local states if this is not handled carefully.  The actual mechanism to ensure 
state synchronization is application dependent, but one typical approach is to 
push “state” to an external service such as a DB, a persistent volume, or a 
distributed KV store, and to load it into each replica when needed (e.g. at 
startup), so that all replicas get their state view from the same copy.
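
Purely as an illustration of that idea, here is a minimal Python sketch that keeps 
state on a shared persistent volume; the mount path, file name, and helper 
functions are hypothetical and not part of any DCAE API:

import json
import os

# Hypothetical path on a persistent volume shared by all replicas
STATE_PATH = os.environ.get("STATE_PATH", "/shared-state/holmes-state.json")

def save_state(state):
    """Push this replica's working state to the shared store."""
    tmp_path = STATE_PATH + ".tmp"
    with open(tmp_path, "w") as f:
        json.dump(state, f)
    os.replace(tmp_path, STATE_PATH)  # atomic swap so readers never see a half-written file

def load_state():
    """Load the shared state (e.g. at startup) so every replica starts from the same copy."""
    if not os.path.exists(STATE_PATH):
        return {}
    with open(STATE_PATH) as f:
        return json.load(f)

# Example: each replica reloads the shared view at startup, updates it, and pushes it back
state = load_state()
state["last_processed_event_id"] = state.get("last_processed_event_id", 0)
save_state(state)

A real deployment would also need some form of locking, or a store with atomic 
operations (a DB or a distributed KV store), if several replicas write state 
concurrently.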

In terms of multiple replicas subscribing to the same Message Router topic, 
there is a way to distribute the load: each replica uses the same 
“groupid” but a different “userid”.  Message Router considers a message 
received by a group once it has been received by any user of that group, so we 
avoid the same message being delivered to multiple replicas.
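
As a sketch only, the same-group/different-userid pattern might look like this in 
Python, assuming the usual Message Router subscribe path 
/events/{topic}/{group}/{consumerId}; the base URL and topic name below are 
placeholders, so please check the DMaaP Message Router documentation for the exact 
API:

import os
import socket
import requests  # third-party HTTP client

MR_BASE = os.environ.get("MR_BASE", "http://message-router:3904")  # assumed service address
TOPIC = "example.topic"             # placeholder topic name
GROUP = "holmes"                    # all replicas share the same group id
CONSUMER_ID = socket.gethostname()  # each replica uses its own id, e.g. the pod name

def poll_once():
    """Fetch the next batch of messages; MR delivers each message to only one consumer per group."""
    url = "{}/events/{}/{}/{}".format(MR_BASE, TOPIC, GROUP, CONSUMER_ID)
    resp = requests.get(url, params={"timeout": 15000}, timeout=20)
    resp.raise_for_status()
    return resp.json()

for msg in poll_once():
    print("this replica received:", msg)

Because every replica polls with the same group id, Message Router balances the 
topic across them instead of broadcasting each message to all replicas.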

Our goal is to keep the interfaces through which a service component interacts 
with the rest of DCAE the same, e.g. how your component gets deployed, how it 
receives configuration updates, etc.

How the scaling trigger arrives and how the actual scaling (i.e. adding more 
replicas) is performed are handled by external mechanisms.  Service components 
themselves do not need to worry about that.


 


We hope to have more details to share next week, and to set up a focused meeting 
to discuss this further.


 


Thanks,


Lusheng


 


 



From: Roger Maitland <roger.maitl...@amdocs.com>
 Date: Friday, February 2, 2018 at 1:48 PM
 To: "fu.guangr...@zte.com.cn" <fu.guangr...@zte.com.cn>, "JI, LUSHENG 
(LUSHENG)" <l...@research.att.com>
 Cc: "onap-discuss@lists.onap.org" <onap-discuss@lists.onap.org>, 
"tang.pe...@zte.com.cn" <tang.pe...@zte.com.cn>
 Subject: RE: [onap-discuss] [dcae][dcaegen2][holmes] A Question on Auto 
Scaling of DCAE Microservices



 



Guangrong,


 


I don’t have an answer in the context of the DCAE controller, but OOM/Kubernetes 
has facilities to help build a Holmes cluster in the containerized version of 
DCAE (which is being worked on).  The cluster can be static (which I believe 
is what most projects intend for Beijing) or dynamic (the OOM team would love 
to work with you on this). Here are some links I hope you find useful:

OOM Scaling: 
https://wiki.onap.org/display/DW/Beijing+Scope#BeijingScope-Scale-clusterONAPservicestoenableseamlessscaling

K8s auto-scaling: 
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/


 


Here is a sample of how an auto-scaler is configured:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
status:
  observedGeneration: 1
  lastScaleTime: <some-time>
  currentReplicas: 1
  desiredReplicas: 1
  currentMetrics:
  - type: Resource
    resource:
      name: cpu
      currentAverageUtilization: 0
      currentAverageValue: 0
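
For reference, the same Kubernetes walkthrough also shows creating an equivalent 
autoscaler imperatively, e.g. kubectl autoscale deployment php-apache 
--cpu-percent=50 --min=1 --max=10, assuming the php-apache Deployment already 
exists; the status section above is reported by Kubernetes once the autoscaler is 
running rather than written by hand.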


 


The OOM team would be happy to work with you on implementing this.


 


Cheers,
 Roger


From: onap-discuss-boun...@lists.onap.org 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of 
fu.guangr...@zte.com.cn
 Sent: Friday, February 2, 2018 2:58 AM
 To: l...@research.att.com
 Cc: onap-discuss@lists.onap.org; tang.pe...@zte.com.cn
 Subject: [onap-discuss] [dcae][dcaegen2][holmes] A Question on Auto Scaling 
of DCAE Microservices


 

Lusheng,

 

The Holmes team is currently working on the auto scaling plans for Holmes. We 
need to confirm something with you.

 

To my understanding, the microservice should only focus on how to maintain and 
balance its data flow rather than on how the docker containers/VMs are scaled by 
their controller. As a DCAE application, I think it is the DCAE controller's 
responsibility to determine when and how to scale Holmes instances in or out. 
Is that correct?

 

If my understanding is correct, does DCAE have any specific requirements 
regarding collecting the status and metrics of its microservices?

 

Regards,

Guangrong

 

 

 

 


