Re: [onap-discuss] OOM Beijing ("init_robot" issue)

2018-11-15 Thread gulsumatici
Dear  Vladyslav,

In the init_robot step, we are getting the same error:
KeyError: u'aai2'

Did you solve this problem?

Thanks,

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13754): https://lists.onap.org/g/onap-discuss/message/13754
Mute This Topic: https://lists.onap.org/mt/22674095/21656
Group Owner: onap-discuss+ow...@lists.onap.org
Unsubscribe: https://lists.onap.org/g/onap-discuss/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [onap-discuss] OOM Beijing - TCA - custom eventName #oom

2018-11-14 Thread jkzcristiano
Dear Vijay,

It works! Thank you very much for the help!

Kind regards

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13739): https://lists.onap.org/g/onap-discuss/message/13739
Mute This Topic: https://lists.onap.org/mt/28134534/21656
Mute #oom: https://lists.onap.org/mk?hashtag=oom=2740164
Group Owner: onap-discuss+ow...@lists.onap.org
Unsubscribe: https://lists.onap.org/g/onap-discuss/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [onap-discuss] OOM Beijing HW requirements

2018-09-06 Thread Michal Ptacek
Thank you very much Michael, very useful link indeed

To be honest, we have already given up on all-in-one deployments and are mainly doing 3-VM deployments of Beijing ONAP.

One of the reasons was this 110-pod limit, but we also had another challenge with the all-in-one (single VM) ONAP deployment:

Rancher networking was collapsing in our case after some time (usually after 24-36 hrs). The symptom was a failed etcd check when running "kubectl get cs"; the problem disappeared when we moved the rancher server to another node. So, practically, we concluded that collocating the rancher server with the k8s master node is a bad idea, and I am wondering if there are really any working all-in-one ONAP deployments that can run longer. Did you face the same problem?

 

thanks again,

Michal

 

From: Michael O'Brien [mailto:frank.obr...@amdocs.com] 
Sent: Wednesday, September 5, 2018 10:26 PM
To: m.pta...@partner.samsung.com; onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] OOM Beijing HW requirements

 

Michal,

   Hi, there is a procedure to increase the 110 limit here

https://lists.onap.org/g/onap-discuss/topic/oom_110_kubernetes_pod/25213556?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,25213556

   /michael
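[Editor's note: the 110 figure is the kubelet's default per-node pod cap, set by its --max-pods flag. A sketch of raising it, assuming a systemd-managed kubelet; the drop-in path and the value 200 are illustrative and vary by install (Rancher-managed kubelets take the flag in their own config):]

```shell
# Raise kubelet's default 110-pod cap via a systemd drop-in
# (paths and the chosen limit are illustrative; adjust per distribution).
cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/20-max-pods.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--max-pods=200"
EOF
sudo systemctl daemon-reload && sudo systemctl restart kubelet
```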

 

From: onap-discuss-boun...@lists.onap.org On Behalf Of Roger Maitland
Sent: Monday, May 7, 2018 11:38 AM
To: m.pta...@partner.samsung.com; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] OOM Beijing HW requirements

 

Hi Michal,

 

The limit of 110 pods applies per node so if you add another node (or two) to 
your cluster you’ll avoid this problem. We’ve had problems in the past with 
multi-node configurations but that seems to be behind us.  I’ll put a note in 
the documentation – thanks for reporting the problem.

 

Cheers,
Roger

 

From: onap-discuss-boun...@lists.onap.org on behalf of Michal Ptacek <m.pta...@partner.samsung.com>
Reply-To: "m.pta...@partner.samsung.com"
Date: Friday, May 4, 2018 at 11:08 AM
To: "onap-discuss@lists.onap.org"
Subject: [onap-discuss] OOM Beijing HW requirements

 

 

Hi all,

 

I am trying to deploy the latest ONAP (Beijing) using OOM (master branch), and I hit some challenges with the suggested HW resources for an all-in-one (rancher server + k8s host on a single node) deployment of ONAP:

 

HW resources:

*   24 VCPU
*   128G RAM
*   500G disc space
*   rhel7.4 os
*   rancher 1.6.14
*   kubernetes 1.8.10
*   helm 2.8.2
*   kubectl 1.8.12
*   docker 17.03-ce

 

Problem:

some random pods are unable to spawn and stay in "Pending" state with the error "No nodes are available that match all of the predicates: Insufficient pods (1)."

 

Analysis:

it seems that my server can allocate at most 110 pods, which I found in the "kubectl describe nodes" output:

...

Allocatable:
 cpu:     24
 memory:  102861656Ki
 pods:    110

...

a full ONAP deployment with all components enabled might be 120+ pods, but I did not even get those 110 running; maybe there is some race condition for the last pods. If I disable 5+ components in onap/values.yaml, it fits onto that server, but then it seems that the "Minimum Hardware Configuration" described in

 
http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_cloud_setup_guide.html

is wrong (120 GB RAM, 160 GB disc, 16 vCPU).

 

Or is there any hint on how to increase the maximum number of allocatable pods?

 

thanks,

Michal

This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement, 

you may review at https://www.amdocs.com/about/email-disclaimer




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12262): https://lists.onap.org/g/onap-discuss/message/12262
Mute This Topic: https://lists.onap.org/mt/22460466/21656
Group Owner: onap-discuss+ow...@lists.onap.org
Unsubscribe: https://lists.onap.org/g/onap-discuss/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [onap-discuss] OOM Beijing HW requirements

2018-09-05 Thread Michael O'Brien
Michal,
   Hi, there is a procedure to increase the 110 limit here
https://lists.onap.org/g/onap-discuss/topic/oom_110_kubernetes_pod/25213556?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,25213556
   /michael

From: onap-discuss-boun...@lists.onap.org  
On Behalf Of Roger Maitland
Sent: Monday, May 7, 2018 11:38 AM
To: m.pta...@partner.samsung.com; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] OOM Beijing HW requirements

Hi Michal,

The limit of 110 pods applies per node so if you add another node (or two) to 
your cluster you’ll avoid this problem. We’ve had problems in the past with 
multi-node configurations but that seems to be behind us.  I’ll put a note in 
the documentation – thanks for reporting the problem.

Cheers,
Roger

From: onap-discuss-boun...@lists.onap.org on behalf of Michal Ptacek <m.pta...@partner.samsung.com>
Reply-To: "m.pta...@partner.samsung.com"
Date: Friday, May 4, 2018 at 11:08 AM
To: "onap-discuss@lists.onap.org"
Subject: [onap-discuss] OOM Beijing HW requirements




Hi all,



I am trying to deploy the latest ONAP (Beijing) using OOM (master branch), and I hit some challenges with the suggested HW resources for an all-in-one (rancher server + k8s host on a single node) deployment of ONAP:



HW resources:
·  24 VCPU
·   128G RAM
·   500G disc space
·   rhel7.4 os
·   rancher 1.6.14
·   kubernetes 1.8.10
·   helm 2.8.2
·   kubectl 1.8.12
·   docker 17.03-ce



Problem:

some random pods are unable to spawn and stay in "Pending" state with the error "No nodes are available that match all of the predicates: Insufficient pods (1)."



Analysis:

it seems that my server can allocate at most 110 pods, which I found in the "kubectl describe nodes" output:

...

Allocatable:
 cpu:     24
 memory:  102861656Ki
 pods:    110

...

a full ONAP deployment with all components enabled might be 120+ pods, but I did not even get those 110 running; maybe there is some race condition for the last pods. If I disable 5+ components in onap/values.yaml, it fits onto that server, but then it seems that the "Minimum Hardware Configuration" described in

http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_cloud_setup_guide.html

is wrong (120 GB RAM, 160 GB disc, 16 vCPU).



Or is there any hint on how to increase the maximum number of allocatable pods?



thanks,

Michal
This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement,
you may review at https://www.amdocs.com/about/email-disclaimer

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12251): https://lists.onap.org/g/onap-discuss/message/12251
Mute This Topic: https://lists.onap.org/mt/22460466/21656
Group Owner: onap-discuss+ow...@lists.onap.org
Unsubscribe: https://lists.onap.org/g/onap-discuss/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [onap-discuss] OOM Beijing ("init_robot" issue)

2018-06-25 Thread Brian
I don't know what is triggering the problem for you. The variable exists but it isn't used in OOM installs.

resources/config/eteshare/config/vm_properties.py:GLOBAL_INJECTED_AAI2_IP_ADDR 
= "N/A"
resources/config/eteshare/config/vm_properties.py:
"GLOBAL_INJECTED_AAI2_IP_ADDR" : "N/A",


The only error you should be getting on init_robot is that the update to the index.html page fails due to the read-only file system (so use robot:robot to access the logs), or, if you run it multiple times, it complains that the default complex is already created.

Brian
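[Editor's note: the read-only failure described above can be checked from outside the pod; a sketch reusing the pod lookup demo-k8s.sh itself performs (namespace and grep pattern as in the trace below):]

```shell
# Confirm /etc/lighttpd is on a read-only mount inside the robot pod
# (pod name is resolved dynamically, the same way demo-k8s.sh does it).
POD=$(kubectl --namespace onap get pods | grep robot | sed 's/ .*//')
kubectl --namespace onap exec "$POD" -- \
  sh -c 'touch /etc/lighttpd/.rwtest 2>/dev/null || echo "read-only confirmed"'
```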


root@rancher:~/oom/kubernetes/robot# ./demo-k8s.sh onap init_robot
+ echo 'Number of parameters:'
Number of parameters:
+ echo 2
2
+ '[' 2 -lt 2 ']'
+ NAMESPACE=onap
+ shift
+ '[' 1 -gt 0 ']'
+ key=init_robot
+ echo KEY:
KEY:
+ echo init_robot
init_robot
+ case $key in
+ TAG=UpdateWebPage
+ read -s -p 'WEB Site Password for user '\''test'\'': ' WEB_PASSWORD
WEB Site Password for user 'test': + '[' test = '' ']'
+ VARIABLES=' -v WEB_PASSWORD:test'
+ shift
+ '[' 0 -eq 2 ']'
+ shift
+ '[' 0 -gt 0 ']'
+ ETEHOME=/var/opt/OpenECOMP_ETE
+ VARIABLEFILES='-V /share/config/vm_properties.py -V 
/share/config/integration_robot_properties.py -V 
/share/config/integration_preload_parameters.py'
++ sed 's/ .*//'
++ kubectl --namespace onap get pods
++ grep robot
+ POD=dev-robot-6d6b56c5b9-wgbjr
+ kubectl --namespace onap exec dev-robot-6d6b56c5b9-wgbjr -- 
/var/opt/OpenECOMP_ETE/runTags.sh -V /share/config/vm_properties.py -V 
/share/config/integration_robot_properties.py -V 
/share/config/integration_preload_parameters.py -v WEB_PASSWORD:test -d 
/share/logs/demo/UpdateWebPage -i UpdateWebPage --display 89
Starting Xvfb on display :89 with res 1280x1024x24
Executing robot tests at log level TRACE
==
OpenECOMP ETE
==
OpenECOMP ETE.Robot
==
OpenECOMP ETE.Robot.Testsuites
==
OpenECOMP ETE.Robot.Testsuites.Update Onap Page :: Initializes ONAP Test We...
==
Update ONAP Page  | FAIL |
IOError: [Errno 30] Read-only file system: u'/etc/lighttpd/authorization'
--
OpenECOMP ETE.Robot.Testsuites.Update Onap Page :: Initializes ONA... | FAIL |
1 critical test, 0 passed, 1 failed
1 test total, 0 passed, 1 failed
==
OpenECOMP ETE.Robot.Testsuites| FAIL |
1 critical test, 0 passed, 1 failed
1 test total, 0 passed, 1 failed
==
OpenECOMP ETE.Robot   | FAIL |
1 critical test, 0 passed, 1 failed
1 test total, 0 passed, 1 failed
==
OpenECOMP ETE | FAIL |
1 critical test, 0 passed, 1 failed
1 test total, 0 passed, 1 failed
==
Output:  /share/logs/demo/UpdateWebPage/output.xml
Log: /share/logs/demo/UpdateWebPage/log.html
Report:  /share/logs/demo/UpdateWebPage/report.html
root@rancher:~/oom/kubernetes/robot#

From: onap-discuss@lists.onap.org  On Behalf Of 
Vladyslav Malynych
Sent: Monday, June 25, 2018 4:11 AM
To: onap-discuss@lists.onap.org
Subject: [onap-discuss] OOM Beijing ("init_robot" issue)


Hi all,

I am trying to run the init_robot command on Beijing installed by OOM (./demo-k8s.sh onap init_robot), but I am getting the error "KeyError: u'aai2'".

According to the logs, the script uses '/var/opt/OpenECOMP_ETE/robot/assets/templates/web/index.html.template', which tries to fill in the {aai2} variable.



As I understand it, this aai2 variable is the onap-aai-inst2 instance in a Heat-based installation:

onap-aai-inst1   AAI      xlarge   Ubuntu 14.04
onap-aai-inst2   AAI/UI   xlarge   Ubuntu 14.04

However, in the OOM Beijing deployment there are no such containers:

onap  dev-aai-5994d7f774-s6bls
onap  dev-aai-babel-d749d5db6-5w4nb
onap  dev-aai-cassandra-0
onap  dev-aai-cassandra-1
onap  dev-aai-cassandra-2
onap  dev-aai-champ-689595cfbb-62mc9
onap  dev-aai-data-router-86cfd6c95b-m29rj
onap  dev-aai-elasticsearch-548b68c46f-q74kw
onap  dev-aai-gizmo-5799b67cdf-828k6
onap  dev-aai-hbase-868f949597-5dvzx
onap  dev-aai-modelloader-54ff794977-tkr6f
onap  dev-aai-resources-d489cd699-7pjtp
onap  dev-a

[onap-discuss] OOM Beijing ("init_robot" issue)

2018-06-25 Thread Vladyslav Malynych



Hi all,
I am trying to run the init_robot command on Beijing installed by OOM (./demo-k8s.sh onap init_robot), but I am getting the error "KeyError: u'aai2'".
According to the logs, the script uses '/var/opt/OpenECOMP_ETE/robot/assets/templates/web/index.html.template', which tries to fill in the {aai2} variable.
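[Editor's note: the error pattern matches Python's str.format() raising on a placeholder with no matching key; a minimal repro, where the template string and properties are hypothetical stand-ins for index.html.template and vm_properties.py:]

```shell
# format() raises KeyError when the template references a placeholder that
# the supplied properties do not define (Python 2 would print u'aai2').
python3 - <<'PYEOF'
template = "AAI UI: http://{aai2}:8080/"   # stand-in for index.html.template
props = {"aai1": "10.0.0.1"}               # no 'aai2' entry, as in OOM installs
try:
    print(template.format(**props))
except KeyError as e:
    print("KeyError:", e)
PYEOF
```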
 
As I understand it, this aai2 variable is the onap-aai-inst2 instance in a Heat-based installation:

onap-aai-inst1   AAI      xlarge   Ubuntu 14.04
onap-aai-inst2   AAI/UI   xlarge   Ubuntu 14.04

 
However, in the OOM Beijing deployment there are no such containers:

onap          dev-aai-5994d7f774-s6bls
onap          dev-aai-babel-d749d5db6-5w4nb
onap          dev-aai-cassandra-0
onap          dev-aai-cassandra-1
onap          dev-aai-cassandra-2
onap          dev-aai-champ-689595cfbb-62mc9
onap          dev-aai-data-router-86cfd6c95b-m29rj
onap          dev-aai-elasticsearch-548b68c46f-q74kw
onap          dev-aai-gizmo-5799b67cdf-828k6
onap          dev-aai-hbase-868f949597-5dvzx
onap          dev-aai-modelloader-54ff794977-tkr6f 
onap          dev-aai-resources-d489cd699-7pjtp
onap          dev-aai-search-data-788dd9458b-9fmjh
onap          dev-aai-sparky-be-6f997f65f9-j6stc
onap          dev-aai-traversal-7bbc5c77d-b2vdn
 
root@install-server:~/oom/kubernetes# git status
On branch 2.0.0-ONAP
 
I'd like to ask whether the robot scripts support OOM installations, or only Heat-based ones?
 
If somebody has an idea what I am doing wrong, I would appreciate any help.

Thanks. 
Best regards,
Vladyslav
 
  


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10542)
Reply To Sender | Reply To Group
Mute This Topic | New Topic
Your Subscription | Contact Group Owner | Unsubscribe
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[onap-discuss] OOM Beijing ("init_robot" issue)

2018-06-25 Thread Vladyslav Malynych



Hi all,
I am trying to run the init_robot command on Beijing installed by OOM (./demo-k8s.sh onap init_robot), but I am getting the error "KeyError: u'aai2'".
According to the logs, the script uses '/var/opt/OpenECOMP_ETE/robot/assets/templates/web/index.html.template', which tries to fill in the {aai2} variable.
 
As I understand it, this aai2 variable is the onap-aai-inst2 instance in a Heat-based installation:

onap-aai-inst1   AAI      xlarge   Ubuntu 14.04
onap-aai-inst2   AAI/UI   xlarge   Ubuntu 14.04

 
However, in the OOM Beijing deployment there are no such containers:

onap          dev-aai-5994d7f774-s6bls
onap          dev-aai-babel-d749d5db6-5w4nb
onap          dev-aai-cassandra-0
onap          dev-aai-cassandra-1
onap          dev-aai-cassandra-2
onap          dev-aai-champ-689595cfbb-62mc9
onap          dev-aai-data-router-86cfd6c95b-m29rj
onap          dev-aai-elasticsearch-548b68c46f-q74kw
onap          dev-aai-gizmo-5799b67cdf-828k6
onap          dev-aai-hbase-868f949597-5dvzx
onap          dev-aai-modelloader-54ff794977-tkr6f 
onap          dev-aai-resources-d489cd699-7pjtp
onap          dev-aai-search-data-788dd9458b-9fmjh
onap          dev-aai-sparky-be-6f997f65f9-j6stc
onap          dev-aai-traversal-7bbc5c77d-b2vdn
 
root@install-server:~/oom/kubernetes# git status
On branch 2.0.0-ONAP
 
Are the robot (demo) scripts supported on OOM installations, or only Heat-based ones?
If somebody has an idea what I am doing wrong, I would appreciate any help.
 
Thanks. 
Best regards,
Vladyslav
 
  


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10543)
Reply To Sender | Reply To Group
Mute This Topic | New Topic
Your Subscription | Contact Group Owner | Unsubscribe
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[onap-discuss] OOM Beijing ("init_robot" issue)

2018-06-25 Thread Vladyslav Malynych



Hi all,
I am trying to run the init_robot command on Beijing installed by #OOM (./demo-k8s.sh onap init_robot), but I am getting the error "KeyError: u'aai2'".
According to the logs, the script uses '/var/opt/OpenECOMP_ETE/robot/assets/templates/web/index.html.template', which tries to fill in the {aai2} variable.
 
As I understand it, this aai2 variable is the onap-aai-inst2 instance in a Heat-based installation:

onap-aai-inst1   AAI      xlarge   Ubuntu 14.04
onap-aai-inst2   AAI/UI   xlarge   Ubuntu 14.04

 
However, in the OOM Beijing deployment there are no such containers:

onap          dev-aai-5994d7f774-s6bls
onap          dev-aai-babel-d749d5db6-5w4nb
onap          dev-aai-cassandra-0
onap          dev-aai-cassandra-1
onap          dev-aai-cassandra-2
onap          dev-aai-champ-689595cfbb-62mc9
onap          dev-aai-data-router-86cfd6c95b-m29rj
onap          dev-aai-elasticsearch-548b68c46f-q74kw
onap          dev-aai-gizmo-5799b67cdf-828k6
onap          dev-aai-hbase-868f949597-5dvzx
onap          dev-aai-modelloader-54ff794977-tkr6f 
onap          dev-aai-resources-d489cd699-7pjtp
onap          dev-aai-search-data-788dd9458b-9fmjh
onap          dev-aai-sparky-be-6f997f65f9-j6stc
onap          dev-aai-traversal-7bbc5c77d-b2vdn
 
root@install-server:~/oom/kubernetes# git status
On branch 2.0.0-ONAP
 
Are the robot (demo) scripts supported on OOM installations, or only Heat-based ones?
If somebody has an idea what I am doing wrong, I would appreciate any help.
 
Thanks. 
Best regards,
Vladyslav
 
  


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10526)
Reply To Sender | Reply To Group
Mute This Topic | New Topic
Your Subscription | Contact Group Owner | Unsubscribe
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [onap-discuss] OOM Beijing CPU utilization

2018-05-29 Thread Morales, Victor
Hey there,

In addition to this, there is a CPU Manager [1] component that can be included in a Kubernetes deployment to prevent noisy neighbors. I haven't used it, but it seems there is an ongoing effort [2].
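[Editor's note: the design proposal in [2] is the kubelet CPU Manager; a sketch of the relevant kubelet settings, where the flag names are real but the reserved amounts are illustrative:]

```shell
# Pin Guaranteed-QoS pods to exclusive cores with the static CPU manager
# policy (reserved CPU/memory amounts below are illustrative values).
kubelet --cpu-manager-policy=static \
        --kube-reserved=cpu=1,memory=2Gi \
        --system-reserved=cpu=1,memory=2Gi
```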

Regards,
Victor Morales

[1] https://github.com/intel/CPU-Manager-for-Kubernetes
[2] 
https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/cpu-manager.md


From:  on behalf of Roger Maitland 

Date: Tuesday, May 29, 2018 at 9:58 AM
To: Arash Hekmat , "GUAN, HONG" , 
Michael O'Brien , "abdelmuhaimen.sea...@orange.com" 
, "onap-discuss@lists.onap.org" 

Subject: Re: [onap-discuss] OOM Beijing CPU utilization

“Wondering if Kubernetes or Docker have a solution for this.” - Yes.  During 
the Casablanca release the OOM team (with the help of the entire ONAP 
community) will add resource limits to the Helm charts.  These limits are 
currently commented out in the charts.  Here is an example:

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  #
  # Example:
  # Configure resource requests and limits
  # ref: http://kubernetes.io/docs/user-guide/compute-resources/
  # Minimum memory for development is 2 CPU cores and 4GB memory
  # Minimum memory for production is 4 CPU cores and 8GB memory
#resources:
#  limits:
#    cpu: 2
#    memory: 4Gi
#  requests:
#    cpu: 2
#    memory: 4Gi

Cheers,
Roger
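[Editor's note: pending that Casablanca work, commented-out chart defaults can be overridden at deploy time; a sketch using Helm 2.x syntax as in this thread, where the release name, `local/onap` repo path, and `appc` subchart key are assumptions:]

```shell
# Hypothetical override: enable resource limits for one component without
# editing values.yaml, via Helm's --set (Helm 2.x era, as in this thread).
helm upgrade -i dev local/onap --namespace onap \
  --set appc.resources.limits.cpu=2 \
  --set appc.resources.limits.memory=4Gi \
  --set appc.resources.requests.cpu=2 \
  --set appc.resources.requests.memory=4Gi
```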

From:  on behalf of Arash Hekmat 

Date: Friday, May 18, 2018 at 10:16 AM
To: "GUAN, HONG" , Michael O'Brien , 
"abdelmuhaimen.sea...@orange.com" , 
"onap-discuss@lists.onap.org" 
Subject: Re: [onap-discuss] OOM Beijing CPU utilization

This is a major drawback of Containerization versus Virtualization.

How a process could hog platform resources and affect everything else. 
Wondering if Kubernetes or Docker have a solution for this.

Arash

From: onap-discuss-boun...@lists.onap.org 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of GUAN, HONG
Sent: Friday, May 18, 2018 9:16 AM
To: Michael O'Brien ; abdelmuhaimen.sea...@orange.com; 
onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] OOM Beijing CPU utilization

FYI

Below is what we found out about CPU management of Logstash:
https://discuss.elastic.co/t/cpu-management-of-logstash/99487

Before deploying ‘log’ (CPU 6%):

[centos@server-k8s-cluster-1node-kubernetes-master-host-afxat7 kubernetes]$ kubectl top node
NAME                                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
server-k8s-cluster-1node-kubernetes-node-host-645o52     312m         3%     12273Mi         77%
server-k8s-cluster-1node-kubernetes-node-host-s891z4     1586m        19%    4082Mi          25%
server-k8s-cluster-1node-kubernetes-node-host-6v5ip2     531m         6%     2278Mi          14%
server-k8s-cluster-1node-kubernetes-master-host-afxat7   124m         1%     2933Mi          18%
server-k8s-cluster-1node-kubernetes-node-host-vpsi6z     197m         2%     12344Mi         78%

After deploying ‘log’ (CPU 97%):
[centos@server-k8s-cluster-1node-kubernetes-master-host-afxat7 kubernetes]$ 
kubectl get pod -n onap -o wide
NAMEREADY STATUSRESTARTS   
AGE   IP   NODE
onap-appc-appc-02/2   Running   0  
15h   10.47.0.8server-k8s-cluster-1node-kubernetes-node-host-645o52
onap-appc-appc-cdt-7878d75dd8-nmhld 1/1   Running   0  
15h   10.36.0.3server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-appc-appc-db-0 2/2   Running   0  
15h   10.42.0.4server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-appc-appc-dgbuilder-989bc9898-prbzg1/1   Running   0  
15h   10.36.0.4server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-consul-consul-6d9946f754-2qv8g 1/1   Running   0  
15h   10.42.0.5server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-consul-consul-server-0 1/1   Running   0  
15h   10.36.0.5server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-consul-consul-server-1 1/1   Running   0  
15h   10.42.0.6server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-consul-consul-server-2 1/1   Running   0  
15h   10.47.0.9server-k8s-cluster-1node-kubernetes-node-host-645o52
onap-log-log-elasticsearch-f4cdbb4b8-d8kgd  1/1   Running   0  
5m10.36.0.8server-k8s-cluster-1node-kubernetes-node-hos

Re: [onap-discuss] OOM Beijing CPU utilization

2018-05-29 Thread Roger Maitland
“Wondering if Kubernetes or Docker have a solution for this.” - Yes.  During 
the Casablanca release the OOM team (with the help of the entire ONAP 
community) will add resource limits to the Helm charts.  These limits are 
currently commented out in the charts.  Here is an example:

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  #
  # Example:
  # Configure resource requests and limits
  # ref: http://kubernetes.io/docs/user-guide/compute-resources/
  # Minimum memory for development is 2 CPU cores and 4GB memory
  # Minimum memory for production is 4 CPU cores and 8GB memory
#resources:
#  limits:
#    cpu: 2
#    memory: 4Gi
#  requests:
#    cpu: 2
#    memory: 4Gi

Cheers,
Roger

From:  on behalf of Arash Hekmat 

Date: Friday, May 18, 2018 at 10:16 AM
To: "GUAN, HONG" , Michael O'Brien , 
"abdelmuhaimen.sea...@orange.com" , 
"onap-discuss@lists.onap.org" 
Subject: Re: [onap-discuss] OOM Beijing CPU utilization

This is a major drawback of Containerization versus Virtualization.

How a process could hog platform resources and affect everything else. 
Wondering if Kubernetes or Docker have a solution for this.

Arash

From: onap-discuss-boun...@lists.onap.org 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of GUAN, HONG
Sent: Friday, May 18, 2018 9:16 AM
To: Michael O'Brien ; abdelmuhaimen.sea...@orange.com; 
onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] OOM Beijing CPU utilization

FYI

Below is what we found out about CPU management of Logstash:
https://discuss.elastic.co/t/cpu-management-of-logstash/99487

Before deploying ‘log’ (CPU 6%):

[centos@server-k8s-cluster-1node-kubernetes-master-host-afxat7 kubernetes]$ kubectl top node
NAME                                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
server-k8s-cluster-1node-kubernetes-node-host-645o52     312m         3%     12273Mi         77%
server-k8s-cluster-1node-kubernetes-node-host-s891z4     1586m        19%    4082Mi          25%
server-k8s-cluster-1node-kubernetes-node-host-6v5ip2     531m         6%     2278Mi          14%
server-k8s-cluster-1node-kubernetes-master-host-afxat7   124m         1%     2933Mi          18%
server-k8s-cluster-1node-kubernetes-node-host-vpsi6z     197m         2%     12344Mi         78%

After deploying ‘log’ (CPU 97%):
[centos@server-k8s-cluster-1node-kubernetes-master-host-afxat7 kubernetes]$ 
kubectl get pod -n onap -o wide
NAMEREADY STATUSRESTARTS   
AGE   IP   NODE
onap-appc-appc-02/2   Running   0  
15h   10.47.0.8server-k8s-cluster-1node-kubernetes-node-host-645o52
onap-appc-appc-cdt-7878d75dd8-nmhld 1/1   Running   0  
15h   10.36.0.3server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-appc-appc-db-0 2/2   Running   0  
15h   10.42.0.4server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-appc-appc-dgbuilder-989bc9898-prbzg1/1   Running   0  
15h   10.36.0.4server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-consul-consul-6d9946f754-2qv8g 1/1   Running   0  
15h   10.42.0.5server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-consul-consul-server-0 1/1   Running   0  
15h   10.36.0.5server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-consul-consul-server-1 1/1   Running   0  
15h   10.42.0.6server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-consul-consul-server-2 1/1   Running   0  
15h   10.47.0.9server-k8s-cluster-1node-kubernetes-node-host-645o52
onap-log-log-elasticsearch-f4cdbb4b8-d8kgd  1/1   Running   0  
5m10.36.0.8server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-log-log-kibana-9f8768474-pps9r 1/1   Running   0  
5m10.42.0.8server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-log-log-logstash-7dd49fd4d-7vhhs   1/1   Running   0  
5m10.42.0.9server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-log-log-logstash-7dd49fd4d-l5thf   1/1   Running   0  
5m10.36.0.7server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-log-log-logstash-7dd49fd4d-sllqv   1/1   Running   0  
5m10.47.0.11   server-k8s-cluster-1node-kubernetes-node-host-645o52
onap-msb-kube2msb-69b4cfb74d-sxc47  1/1  

Re: [onap-discuss] OOM Beijing CPU utilization

2018-05-18 Thread Arash Hekmat
This is a major drawback of containerization versus virtualization: a process can hog platform resources and affect everything else.

I am wondering whether Kubernetes or Docker has a solution for this.

Arash

From: onap-discuss-boun...@lists.onap.org 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of GUAN, HONG
Sent: Friday, May 18, 2018 9:16 AM
To: Michael O'Brien <frank.obr...@amdocs.com>; abdelmuhaimen.sea...@orange.com; 
onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] OOM Beijing CPU utilization

FYI

Below is what we found out about CPU management of Logstash:
https://discuss.elastic.co/t/cpu-management-of-logstash/99487

Before deploying 'log' (CPU 6%):

[centos@server-k8s-cluster-1node-kubernetes-master-host-afxat7 kubernetes]$ kubectl top node
NAME                                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
server-k8s-cluster-1node-kubernetes-node-host-645o52     312m         3%     12273Mi         77%
server-k8s-cluster-1node-kubernetes-node-host-s891z4     1586m        19%    4082Mi          25%
server-k8s-cluster-1node-kubernetes-node-host-6v5ip2     531m         6%     2278Mi          14%
server-k8s-cluster-1node-kubernetes-master-host-afxat7   124m         1%     2933Mi          18%
server-k8s-cluster-1node-kubernetes-node-host-vpsi6z     197m         2%     12344Mi         78%

After deploying 'log' (CPU 97%):
[centos@server-k8s-cluster-1node-kubernetes-master-host-afxat7 kubernetes]$ 
kubectl get pod -n onap -o wide
NAMEREADY STATUSRESTARTS   
AGE   IP   NODE
onap-appc-appc-02/2   Running   0  
15h   10.47.0.8server-k8s-cluster-1node-kubernetes-node-host-645o52
onap-appc-appc-cdt-7878d75dd8-nmhld 1/1   Running   0  
15h   10.36.0.3server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-appc-appc-db-0 2/2   Running   0  
15h   10.42.0.4server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-appc-appc-dgbuilder-989bc9898-prbzg1/1   Running   0  
15h   10.36.0.4server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-consul-consul-6d9946f754-2qv8g 1/1   Running   0  
15h   10.42.0.5server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-consul-consul-server-0 1/1   Running   0  
15h   10.36.0.5server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-consul-consul-server-1 1/1   Running   0  
15h   10.42.0.6server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-consul-consul-server-2 1/1   Running   0  
15h   10.47.0.9server-k8s-cluster-1node-kubernetes-node-host-645o52
onap-log-log-elasticsearch-f4cdbb4b8-d8kgd  1/1   Running   0  
5m10.36.0.8server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-log-log-kibana-9f8768474-pps9r 1/1   Running   0  
5m10.42.0.8server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-log-log-logstash-7dd49fd4d-7vhhs   1/1   Running   0  
5m10.42.0.9server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-log-log-logstash-7dd49fd4d-l5thf   1/1   Running   0  
5m10.36.0.7server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-log-log-logstash-7dd49fd4d-sllqv   1/1   Running   0  
5m10.47.0.11   server-k8s-cluster-1node-kubernetes-node-host-645o52
onap-msb-kube2msb-69b4cfb74d-sxc47  1/1   Running   0  
15h   10.42.0.3server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-msb-msb-consul-b946c8486-dcbm9 1/1   Running   0  
15h   10.36.0.1server-k8s-cluster-1node-kubernetes-node-host-s891z4

[centos@server-k8s-cluster-1node-kubernetes-master-host-afxat7 kubernetes]$ kubectl top node
NAME                                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
server-k8s-cluster-1node-kubernetes-node-host-645o52     971m         12%    12452Mi         78%
server-k8s-cluster-1node-kubernetes-node-host-s891z4     825m         10%    5182Mi          32%
server-k8s-cluster-1node-kubernetes-node-host-6v5ip2     7807m        97%    4354Mi          27%
server-k8s-cluster-1node-kubernetes-master-host-afxat7   158m         1%     2952Mi          18%
server-k8s-cluster-1node-kubernetes-node-host-vpsi6z     213m         2%     12461Mi         78%
[centos@server-k8s-cluster-1node-kubernetes-master-host-afxat7 kubernetes]$

Thanks,
Hong

From: onap-discuss-boun...@lists.onap.org [mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of OBRIEN, FRANK MICHAEL
Sent: Friday, May 18, 2018 12:03 AM
To: abdel

Re: [onap-discuss] OOM Beijing CPU utilization

2018-05-18 Thread Michael O'Brien
r-host-afxat7 kubernetes]$ 
kubectl top node
NAME CPU(cores)   CPU%  
MEMORY(bytes)   MEMORY%
server-k8s-cluster-1node-kubernetes-node-host-645o52 971m 12%   
12452Mi 78%
server-k8s-cluster-1node-kubernetes-node-host-s891z4 825m 10%   
5182Mi  32%
server-k8s-cluster-1node-kubernetes-node-host-6v5ip2 7807m97%   
4354Mi  27%
server-k8s-cluster-1node-kubernetes-master-host-afxat7   158m 1%
2952Mi  18%
server-k8s-cluster-1node-kubernetes-node-host-vpsi6z 213m 2%
12461Mi 78%
[centos@server-k8s-cluster-1node-kubernetes-master-host-afxat7 kubernetes]$

Thanks,
Hong

From: 
onap-discuss-boun...@lists.onap.org<mailto:onap-discuss-boun...@lists.onap.org> 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of OBRIEN, FRANK MICHAEL
Sent: Friday, May 18, 2018 12:03 AM
To: abdelmuhaimen.sea...@orange.com<mailto:abdelmuhaimen.sea...@orange.com>; 
onap-discuss@lists.onap.org<mailto:onap-discuss@lists.onap.org>
Subject: Re: [onap-discuss] OOM Beijing CPU utilization

Hi,
   I have seen this 3 times from Dec to March - tracking this nodejs issue via 
OOM-834 (not an OOM issue) - last saw it 27th March under 1.8.10 (current 
version) - but running helm 2.6.1 (current version 2.8.2)
   
https://jira.onap.org/browse/OOM-834<https://urldefense.proofpoint.com/v2/url?u=https-3A__jira.onap.org_browse_OOM-2D834=DwMFAg=LFYZ-o9_HUMeMTSQicvjIg=bUW1yd5b4djZ_J3L_jlK2A=L0JQnOxKvCvyKzAvkkzLD91rQxughYCQ5gUi3H9258c=obo4K1OVBv0H0CsRoXCG0T10rOeUddAbX9jRKXDr4nM=>

   Something in the infrastructure is causing this - as I have seen it on an 
idle kubernetes cluster (no onap pods installed)
   Will look again through the k8s jiras

   You are correct - it is not the .ru crypto miner that targets 10250/pods or 
the new one that targets a cluster without oauth lockdown
Tracking anti-crypto here

https://jira.onap.org/browse/LOG-353<https://urldefense.proofpoint.com/v2/url?u=https-3A__jira.onap.org_browse_LOG-2D353=DwMFAg=LFYZ-o9_HUMeMTSQicvjIg=bUW1yd5b4djZ_J3L_jlK2A=L0JQnOxKvCvyKzAvkkzLD91rQxughYCQ5gUi3H9258c=h__wOfz1ALTKTbE7kUVCnNWH0kWksFXAj8A7nB-eZBQ=>

I think I will ask for 5 min to go over the lockdown of clusters with the 
security subcommittee - the oauth lockdown will cover off 10249-10255 as well.
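The port lockdown mentioned above can be sketched as a host firewall rule set. This is only a sketch: the trusted CIDR 10.0.0.0/8 is an assumption standing in for your cluster network, and the rules are generated as text here rather than applied.

```shell
# Sketch: generate DROP rules for the kubelet ports 10249-10255 so only
# the in-cluster network (assumed 10.0.0.0/8 here) can reach them.
trusted_cidr="10.0.0.0/8"
rules=""
for port in $(seq 10249 10255); do
  rules="${rules}iptables -A INPUT -p tcp --dport ${port} ! -s ${trusted_cidr} -j DROP
"
done
printf '%s' "$rules"
rule_count=$(printf '%s' "$rules" | grep -c 'iptables')
echo "generated ${rule_count} rules"
```

On a real host the generated lines would be run as root, and an equivalent oauth lockdown at the kubelet level is still the proper fix.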

   /michael

From: 
onap-discuss-boun...@lists.onap.org<mailto:onap-discuss-boun...@lists.onap.org> 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of 
abdelmuhaimen.sea...@orange.com<mailto:abdelmuhaimen.sea...@orange.com>
Sent: Thursday, May 17, 2018 7:28 PM
To: onap-discuss@lists.onap.org<mailto:onap-discuss@lists.onap.org>
Subject: [onap-discuss] OOM Beijing CPU utilization

Hi,
I have a running OOM ONAP Beijing deployment on 2 nodes.

After a few days of running OK, I noticed around 100% CPU on all 16 vCPUs on the 
1st node.

I see a process nodejs running with 815% CPU as shown below.

What is this process doing?

I checked for mining, and there's none, and I have port 10250 blocked, I don't 
see any suspicious processes.

I had to kill the nodejs process in order to regain interactivity with my onap 
deployment.

Thanks.

root@olc-oom-bjng:~# top
top - 22:58:14 up 13 days,  1:28,  1 user,  load average: 53.66, 49.26, 48.69
Tasks: 1181 total,   1 running, 1175 sleeping,   0 stopped,   5 zombie
%Cpu(s): 84.0 us, 14.9 sy,  0.1 ni,  0.4 id,  0.0 wa,  0.0 hi,  0.2 si,  0.3 st
KiB Mem : 10474657+total,  1037308 free, 8839 used, 15319272 buff/cache
KiB Swap:0 total,0 free,0 used. 14242952 avail Mem

  PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
20119 root  20   0 1431420  65876   1124 S 815.4  0.1  34465:04 nodjs -c 
/bin/config.json
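Before killing such a process outright, the host PID can be mapped back to its container via /proc/<pid>/cgroup. A minimal sketch; the cgroup line and container id below are made-up samples standing in for the real /proc/20119/cgroup:

```shell
# Sketch: extract the short container id from a docker cgroup path.
# On the affected host you would read:  cat /proc/20119/cgroup
cgroup_line='4:cpu:/docker/3ab1f78c9d2e4f5a6b7c8d9e0f1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f90'
container_id=$(printf '%s\n' "$cgroup_line" | awk -F/ '{print substr($NF, 1, 12)}')
echo "container: ${container_id}"
# docker inspect --format '{{.Name}}' "$container_id"   # then reveals which pod owns it
```

Knowing the owning pod lets you delete or restrict it through Kubernetes instead of killing the process on the host.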


_



Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc

pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler

a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,

Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.



This message and its attachments may contain confidential or privileged 
information that may be protected by law;

they should not be distributed, used or copied without authorisation.

If you have received this email in error, please notify the sender and delete 
this message and its attachments.

As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.

Thank you.
This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy stat

Re: [onap-discuss] OOM Beijing CPU utilization

2018-05-18 Thread GUAN, HONG
FYI

Below are what we found out about CPU Management of Logstash. 
https://discuss.elastic.co/t/cpu-management-of-logstash/99487

Before deploy 'log'(CPU 6%)

[centos@server-k8s-cluster-1node-kubernetes-master-host-afxat7 kubernetes]$ 
kubectl top node
NAME CPU(cores)   CPU%  
MEMORY(bytes)   MEMORY%
server-k8s-cluster-1node-kubernetes-node-host-645o52 312m 3%
12273Mi 77%
server-k8s-cluster-1node-kubernetes-node-host-s891z4 1586m19%   
4082Mi  25%
server-k8s-cluster-1node-kubernetes-node-host-6v5ip2 531m 6%
2278Mi  14%
server-k8s-cluster-1node-kubernetes-master-host-afxat7   124m 1%
2933Mi  18%
server-k8s-cluster-1node-kubernetes-node-host-vpsi6z 197m 2%
12344Mi 78%

After deploy 'log' (CPU 97%)
[centos@server-k8s-cluster-1node-kubernetes-master-host-afxat7 kubernetes]$ 
kubectl get pod -n onap -o wide
NAMEREADY STATUSRESTARTS   
AGE   IP   NODE
onap-appc-appc-02/2   Running   0  
15h   10.47.0.8server-k8s-cluster-1node-kubernetes-node-host-645o52
onap-appc-appc-cdt-7878d75dd8-nmhld 1/1   Running   0  
15h   10.36.0.3server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-appc-appc-db-0 2/2   Running   0  
15h   10.42.0.4server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-appc-appc-dgbuilder-989bc9898-prbzg1/1   Running   0  
15h   10.36.0.4server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-consul-consul-6d9946f754-2qv8g 1/1   Running   0  
15h   10.42.0.5server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-consul-consul-server-0 1/1   Running   0  
15h   10.36.0.5server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-consul-consul-server-1 1/1   Running   0  
15h   10.42.0.6server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-consul-consul-server-2 1/1   Running   0  
15h   10.47.0.9server-k8s-cluster-1node-kubernetes-node-host-645o52
onap-log-log-elasticsearch-f4cdbb4b8-d8kgd  1/1   Running   0  
5m10.36.0.8server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-log-log-kibana-9f8768474-pps9r 1/1   Running   0  
5m10.42.0.8server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-log-log-logstash-7dd49fd4d-7vhhs   1/1   Running   0  
5m10.42.0.9server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-log-log-logstash-7dd49fd4d-l5thf   1/1   Running   0  
5m10.36.0.7server-k8s-cluster-1node-kubernetes-node-host-s891z4
onap-log-log-logstash-7dd49fd4d-sllqv   1/1   Running   0  
5m10.47.0.11   server-k8s-cluster-1node-kubernetes-node-host-645o52
onap-msb-kube2msb-69b4cfb74d-sxc47  1/1   Running   0  
15h   10.42.0.3server-k8s-cluster-1node-kubernetes-node-host-6v5ip2
onap-msb-msb-consul-b946c8486-dcbm9 1/1   Running   0  
15h   10.36.0.1server-k8s-cluster-1node-kubernetes-node-host-s891z4

[centos@server-k8s-cluster-1node-kubernetes-master-host-afxat7 kubernetes]$ 
kubectl top node
NAME CPU(cores)   CPU%  
MEMORY(bytes)   MEMORY%
server-k8s-cluster-1node-kubernetes-node-host-645o52 971m 12%   
12452Mi 78%
server-k8s-cluster-1node-kubernetes-node-host-s891z4 825m 10%   
5182Mi  32%
server-k8s-cluster-1node-kubernetes-node-host-6v5ip2 7807m97%   
4354Mi  27%
server-k8s-cluster-1node-kubernetes-master-host-afxat7   158m 1%
2952Mi  18%
server-k8s-cluster-1node-kubernetes-node-host-vpsi6z 213m 2%
12461Mi 78%
[centos@server-k8s-cluster-1node-kubernetes-master-host-afxat7 kubernetes]$
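Since logstash is the component pushing this node to 97%, one mitigation is to cap its CPU with Kubernetes resource limits. A hedged sketch only: the deployment name is taken from the pod list above, and the limit values are illustrative, not a tested recommendation.

```shell
# Sketch: cap the logstash deployment so it cannot saturate a node.
# Limit/request values are illustrative assumptions, not tuned numbers.
kubectl -n onap set resources deployment onap-log-log-logstash \
  --limits=cpu=2,memory=4Gi --requests=cpu=500m,memory=1Gi
```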

Thanks,
Hong

From: onap-discuss-boun...@lists.onap.org 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of OBRIEN, FRANK MICHAEL
Sent: Friday, May 18, 2018 12:03 AM
To: abdelmuhaimen.sea...@orange.com; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] OOM Beijing CPU utilization

Hi,
   I have seen this 3 times from Dec to March - tracking this nodejs issue via 
OOM-834 (not an OOM issue) - last saw it 27th March under 1.8.10 (current 
version) - but running helm 2.6.1 (current version 2.8.2)
   
https://jira.onap.org/browse/OOM-834<https://urldefense.proofpoint.com/v2/url?u=https-3A__jira.onap.org_browse_OOM-2D834=DwMFAg=LFYZ-o9_HUMeMTSQicvjIg=bUW1yd5b4djZ_J3L_jlK2A=L0JQnOxKvCvyKzAvkkzLD91rQxughYCQ5gUi3H92

Re: [onap-discuss] OOM Beijing CPU utilization

2018-05-17 Thread Michael O'Brien
Hi,
   I have seen this 3 times from Dec to March - tracking this nodejs issue via 
OOM-834 (not an OOM issue) - last saw it 27th March under 1.8.10 (current 
version) - but running helm 2.6.1 (current version 2.8.2)
   https://jira.onap.org/browse/OOM-834

   Something in the infrastructure is causing this - as I have seen it on an 
idle kubernetes cluster (no onap pods installed)
   Will look again through the k8s jiras

   You are correct - it is not the .ru crypto miner that targets 10250/pods or 
the new one that targets a cluster without oauth lockdown
Tracking anti-crypto here
https://jira.onap.org/browse/LOG-353

I think I will ask for 5 min to go over the lockdown of clusters with the 
security subcommittee - the oauth lockdown will cover off 10249-10255 as well.

   /michael

From: onap-discuss-boun...@lists.onap.org 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of 
abdelmuhaimen.sea...@orange.com
Sent: Thursday, May 17, 2018 7:28 PM
To: onap-discuss@lists.onap.org
Subject: [onap-discuss] OOM Beijing CPU utilization

Hi,
I have a running OOM ONAP Beijing deployment on 2 nodes.

After a few days of running OK, I noticed around 100% CPU on all 16 vCPUs on the 
1st node.

I see a process nodejs running with 815% CPU as shown below.

What is this process doing?

I checked for mining, and there's none, and I have port 10250 blocked, I don't 
see any suspicious processes.

I had to kill the nodejs process in order to regain interactivity with my onap 
deployment.

Thanks.

root@olc-oom-bjng:~# top
top - 22:58:14 up 13 days,  1:28,  1 user,  load average: 53.66, 49.26, 48.69
Tasks: 1181 total,   1 running, 1175 sleeping,   0 stopped,   5 zombie
%Cpu(s): 84.0 us, 14.9 sy,  0.1 ni,  0.4 id,  0.0 wa,  0.0 hi,  0.2 si,  0.3 st
KiB Mem : 10474657+total,  1037308 free, 8839 used, 15319272 buff/cache
KiB Swap:0 total,0 free,0 used. 14242952 avail Mem

  PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
20119 root  20   0 1431420  65876   1124 S 815.4  0.1  34465:04 nodjs -c 
/bin/config.json


___
onap-discuss mailing list
onap-discuss@lists.onap.org
https://lists.onap.org/mailman/listinfo/onap-discuss


[onap-discuss] OOM Beijing CPU utilization

2018-05-17 Thread abdelmuhaimen.seaudi
Hi,
I have a running OOM ONAP Beijing deployment on 2 nodes.

After a few days of running OK, I noticed around 100% CPU on all 16 vCPUs on the 
1st node.

I see a process nodejs running with 815% CPU as shown below.

What is this process doing?

I checked for mining, and there's none, and I have port 10250 blocked, I don't 
see any suspicious processes.

I had to kill the nodejs process in order to regain interactivity with my onap 
deployment.

Thanks.

root@olc-oom-bjng:~# top
top - 22:58:14 up 13 days,  1:28,  1 user,  load average: 53.66, 49.26, 48.69
Tasks: 1181 total,   1 running, 1175 sleeping,   0 stopped,   5 zombie
%Cpu(s): 84.0 us, 14.9 sy,  0.1 ni,  0.4 id,  0.0 wa,  0.0 hi,  0.2 si,  0.3 st
KiB Mem : 10474657+total,  1037308 free, 8839 used, 15319272 buff/cache
KiB Swap:0 total,0 free,0 used. 14242952 avail Mem

  PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
20119 root  20   0 1431420  65876   1124 S 815.4  0.1  34465:04 nodjs -c 
/bin/config.json




Re: [onap-discuss] OOM Beijing HW requirements

2018-05-07 Thread Roger Maitland
Hi Michal,

The limit of 110 pods applies per node so if you add another node (or two) to 
your cluster you’ll avoid this problem. We’ve had problems in the past with 
multi-node configurations but that seems to be behind us.  I’ll put a note in 
the documentation – thanks for reporting the problem.
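The 110-pod ceiling Roger mentions is the kubelet default (the --max-pods flag) and can be read per node from `kubectl describe nodes`. A sketch, parsing a sample of that output; the heredoc stands in for live cluster output:

```shell
# Sketch: pull the per-node pod ceiling out of `kubectl describe nodes`.
# The heredoc is a sample; on a live cluster pipe the real command instead:
#   kubectl describe nodes | awk '/pods:/ {print $2}'
describe_sample=$(cat <<'EOF'
Allocatable:
 cpu:     24
 memory:  102861656Ki
 pods:    110
EOF
)
pod_limit=$(printf '%s\n' "$describe_sample" | awk '/pods:/ {print $2}')
echo "per-node pod limit: ${pod_limit}"
# Raising it means passing a larger --max-pods to each kubelet; where that flag
# is configured depends on your installer (an assumption to verify for Rancher).
```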

Cheers,
Roger

From: <onap-discuss-boun...@lists.onap.org> on behalf of Michal Ptacek 
<m.pta...@partner.samsung.com>
Reply-To: "m.pta...@partner.samsung.com" <m.pta...@partner.samsung.com>
Date: Friday, May 4, 2018 at 11:08 AM
To: "onap-discuss@lists.onap.org" <onap-discuss@lists.onap.org>
Subject: [onap-discuss] OOM Beijing HW requirements




Hi all,



I am trying to deploy latest ONAP (Beijing) using OOM (master branch) and I 
found some challenges using suggested HW resources for all-in-one (rancher 
server + k8s host on single node) deployment of ONAP:



HW resources:
· 24 VCPU
· 128G RAM
· 500G disc space
· rhel7.4 os
· rancher 1.6.14
· kubernetes 1.8.10
· helm 2.8.2
· kubectl 1.8.12
· docker 17.03-ce



Problem:

some random pods are unable to spawn and are in "Pending" state with the error "No 
nodes are available that match all of the predicates: Insufficient pods (1)."



Analysis:

it seems that my server can allocate a maximum of 110 pods, which I found in the 
"kubectl describe nodes" output

...

Allocatable:

cpu: 24

memory:  102861656Ki

pods:110

...

a full ONAP deployment with all components enabled might be 120+ pods, but I did 
not even get those 110 running,

maybe due to some race condition for the latest pods. If I disable 5+ components in 
onap/values.yaml, it will fit on that server, but then it seems that the "Minimum 
Hardware Configuration" described in

http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_cloud_setup_guide.html

is wrong (120GB RAM, 160G disc, 16vCPU)



or is there any hint on how to increase that maximum number of allocatable pods?



thanks,

Michal


Re: [onap-discuss] OOM Beijing

2018-05-06 Thread VENKATESH KUMAR, VIJAY
Hi,
 The file oom/kubernetes/dcaegen2/charts/dcae-bootstrap/values.yaml provides an 
option to override the container version; when it is commented out, the version set 
in the blueprint (or corresponding input files) is used. The component deployment 
itself is controlled via a script (bootstrap.sh) contained in the K8s dcae bootstrap 
container.

Thanks,
Vijay

From: onap-discuss-boun...@lists.onap.org <onap-discuss-boun...@lists.onap.org> 
On Behalf Of abdelmuhaimen.sea...@orange.com
Sent: Sunday, May 06, 2018 11:58 AM
To: FREEMAN, BRIAN D <bf1...@att.com>; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] OOM Beijing

Hi, I got it: it's Cloudify that will spin up the TCA component.

I will check the progress further in the Cloudify logs.

2018-05-05 03:25:32,337 [549939a6-beac-4a52-99fb-01fa7bc93aa3] INFO: Deploying 
s9687eae2b755408282edb6af063f9b99-dcaegen2-analytics-tca, image: 
nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.tca-cdap-container:1.0.0,
 env: {'CONSUL_HOST': u'consul-server.onap', u'DMAAPSUBTOPIC': 
u'unauthenticated.SEC_MEASUREMENT_OUTPUT', u'AAIHOST': 
u'aai.onap.svc.cluster.local', u'CONSUL_PORT': u'8500', 
'CONFIG_BINDING_SERVICE': u'config_binding_service', u'CBS_HOST': 
u'config-binding-service', u'DMAAPPORT': u'3904', 'SERVICE_TAGS': 'tca', 
u'CBS_PORT': u'1', u'DMAAPHOST': u'message-router.onap', u'DMAAPPUBTOPIC': 
u'unauthenticated.DCAE_CL_OUTPUT', u'AAIPORT': u'8443'}, kwargs: {'envs': 
{u'CONSUL_HOST': u'consul-server.onap', u'DMAAPSUBTOPIC': 
u'unauthenticated.SEC_MEASUREMENT_OUTPUT', u'AAIHOST': 
u'aai.onap.svc.cluster.local', u'CONFIG_BINDING_SERVICE': 
u'config_binding_service', u'CBS_HOST': u'config-binding-service', 
u'DMAAPPORT': u'3904', 'SERVICE_TAGS': 'tca', u'CBS_PORT': u'1', 
u'DMAAPHOST': u'message-router.onap', u'CONSUL_PORT': u'8500', 
u'DMAAPPUBTOPIC': u'unauthenticated.DCAE_CL_OUTPUT', u'AAIPORT': u'8443'}, 
'log_info': {u'log_directory': u'/opt/app/TCAnalytics/logs'}, 'labels': 
{'cfydeployment': u'tca', 'cfynodeinstance': u'tca_k8s_0us7db', 'cfynode': 
u'tca_k8s'}, 'ports': [u'11011:32010'], 'volumes': []}

From: 
onap-discuss-boun...@lists.onap.org<mailto:onap-discuss-boun...@lists.onap.org> 
[onap-discuss-boun...@lists.onap.org] on behalf of 
abdelmuhaimen.sea...@orange.com<mailto:abdelmuhaimen.sea...@orange.com> 
[abdelmuhaimen.sea...@orange.com]
Sent: Sunday, May 06, 2018 3:10 PM
To: FREEMAN, BRIAN D; 
onap-discuss@lists.onap.org<mailto:onap-discuss@lists.onap.org>
Subject: Re: [onap-discuss] OOM Beijing
Hi,

I noticed that the TCA-CDAP and VES components are commented out in 
oom/kubernetes/dcaegen2/charts/dcae-bootstrap/values.yaml:
#  tca: 
nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.tca-cdap-container.tca-cdap-container:1.0.0
#  ves: 
nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1-latest


What is the reason for that ?

If I comment out those lines, how can I deploy the new components in OOM 
Beijing ?

Thanks

A. Seaudi


From: FREEMAN, BRIAN D [bf1...@att.com]
Sent: Saturday, May 05, 2018 5:01 PM
To: SEAUDI Abdelmuhaimen OBS/CSO; 
onap-discuss@lists.onap.org<mailto:onap-discuss@lists.onap.org>
Subject: RE: OOM Beijing
Update your vm_properties.py in the robot helm charts and do a make robot, make 
onap, helm upgrade etc

https://gerrit.onap.org/r/#/c/44399/<https://urldefense.proofpoint.com/v2/url?u=https-3A__gerrit.onap.org_r_-23_c_44399_=DwMF-g=LFYZ-o9_HUMeMTSQicvjIg=6WYcUG7NY-ZxfqWx5MmzVQ=1XuPS-LH1rXDdxZNha7EmP5UAhFW1A6FURnmqUsBeWA=fMU8w7q8KOYlC7e3ZInz1LkVxPffOArc6XOGMevSk8Q=>


the DMaaP issue is a bigger issue for you. OOF is for homing assignments for 
VNFs and the OOM config changes haven't been merged yet.
The error simply means the variable hasn't been defined in the OOM version of 
vm_properties.py. The HEAT install based version handles the creation/update of 
vm_properties.py differently since IP addresses can be used in the HEAT install 
but the K8 install needs the internal K8 service names.

Brian


From: 
onap-discuss-boun...@lists.onap.org<mailto:onap-discuss-boun...@lists.onap.org> 
<onap-discuss-boun...@lists.onap.org<mailto:onap-discuss-boun...@lists.onap.org>>
 On Behalf Of 
abdelmuhaimen.sea...@orange.com<mailto:abdelmuhaimen.sea...@orange.com>
Sent: Saturday, May 05, 2018 9:18 AM
To: onap-discuss@lists.onap.org<mailto:onap-discuss@lists.onap.org>
Subject: [onap-discuss] OOM Beijing

Hi,

I was trying to study CDAP/DCAE using OOM Amsterdam, but I faced some issues, 
and I read in a recent mail on the mailing list that this method is not supported, 
and that the supported method now in Beijing is to have DCAE containerized, 
instead of the Heat deployment from OOM Amsterdam.

I finished the installation of OOM Beijing, and after I ran the ete health 
robot script it gave the following output.

Is this normal, and what does the OOF_HOMING_ENDPOINT error mean?

Also what is the err

Re: [onap-discuss] OOM Beijing

2018-05-06 Thread abdelmuhaimen.seaudi
Hi, I got it: it's Cloudify that will spin up the TCA component.

I will check the progress further in the Cloudify logs.

2018-05-05 03:25:32,337 [549939a6-beac-4a52-99fb-01fa7bc93aa3] INFO: Deploying 
s9687eae2b755408282edb6af063f9b99-dcaegen2-analytics-tca, image: 
nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.tca-cdap-container:1.0.0,
 env: {'CONSUL_HOST': u'consul-server.onap', u'DMAAPSUBTOPIC': 
u'unauthenticated.SEC_MEASUREMENT_OUTPUT', u'AAIHOST': 
u'aai.onap.svc.cluster.local', u'CONSUL_PORT': u'8500', 
'CONFIG_BINDING_SERVICE': u'config_binding_service', u'CBS_HOST': 
u'config-binding-service', u'DMAAPPORT': u'3904', 'SERVICE_TAGS': 'tca', 
u'CBS_PORT': u'1', u'DMAAPHOST': u'message-router.onap', u'DMAAPPUBTOPIC': 
u'unauthenticated.DCAE_CL_OUTPUT', u'AAIPORT': u'8443'}, kwargs: {'envs': 
{u'CONSUL_HOST': u'consul-server.onap', u'DMAAPSUBTOPIC': 
u'unauthenticated.SEC_MEASUREMENT_OUTPUT', u'AAIHOST': 
u'aai.onap.svc.cluster.local', u'CONFIG_BINDING_SERVICE': 
u'config_binding_service', u'CBS_HOST': u'config-binding-service', 
u'DMAAPPORT': u'3904', 'SERVICE_TAGS': 'tca', u'CBS_PORT': u'1', 
u'DMAAPHOST': u'message-router.onap', u'CONSUL_PORT': u'8500', 
u'DMAAPPUBTOPIC': u'unauthenticated.DCAE_CL_OUTPUT', u'AAIPORT': u'8443'}, 
'log_info': {u'log_directory': u'/opt/app/TCAnalytics/logs'}, 'labels': 
{'cfydeployment': u'tca', 'cfynodeinstance': u'tca_k8s_0us7db', 'cfynode': 
u'tca_k8s'}, 'ports': [u'11011:32010'], 'volumes': []}

From: 
onap-discuss-boun...@lists.onap.org [onap-discuss-boun...@lists.onap.org] on 
behalf of abdelmuhaimen.sea...@orange.com [abdelmuhaimen.sea...@orange.com]
Sent: Sunday, May 06, 2018 3:10 PM
To: FREEMAN, BRIAN D; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] OOM Beijing

Hi,

I noticed that the TCA-CDAP and VES components are commented out in 
oom/kubernetes/dcaegen2/charts/dcae-bootstrap/values.yaml:
#  tca: 
nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.tca-cdap-container.tca-cdap-container:1.0.0
#  ves: 
nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1-latest


What is the reason for that ?

If I comment out those lines, how can I deploy the new components in OOM 
Beijing ?

Thanks

A. Seaudi


From: FREEMAN, BRIAN D [bf1...@att.com]
Sent: Saturday, May 05, 2018 5:01 PM
To: SEAUDI Abdelmuhaimen OBS/CSO; onap-discuss@lists.onap.org
Subject: RE: OOM Beijing

Update your vm_properties.py in the robot helm charts and do a make robot, make 
onap, helm upgrade etc

https://gerrit.onap.org/r/#/c/44399/


the DMaaP issue is a bigger issue for you. OOF is for homing assignments for 
VNFs and the OOM config changes haven't been merged yet.
The error simply means the variable hasn’t been defined in the OOM version of 
vm_properties.py. The HEAT install based version handles the creation/update of 
vm_properties.py differently since IP addresses can be used in the HEAT install 
but the K8 install needs the internal K8 service names.

Brian


From: onap-discuss-boun...@lists.onap.org <onap-discuss-boun...@lists.onap.org> 
On Behalf Of abdelmuhaimen.sea...@orange.com
Sent: Saturday, May 05, 2018 9:18 AM
To: onap-discuss@lists.onap.org
Subject: [onap-discuss] OOM Beijing

Hi,

I was trying to study CDAP/DCAE using OOM Amsterdam, but I faced some issues, 
and I read in a recent mail on the mailing list that this method is not supported, 
and that the supported method now in Beijing is to have DCAE containerized, 
instead of the Heat deployment from OOM Amsterdam.

I finished the installation of OOM Beijing, and after I ran the ete health 
robot script it gave the following output.

Is this normal, and what does the OOF_HOMING_ENDPOINT error mean?

Also, what does the error for NBI mean?

Do I need to update the kube2msb token as in the Amsterdam release, or is this no 
longer needed?




root@olc-oom-bjng:~/oom/kubernetes/robot# ./ete-k8s.sh onap health
Starting Xvfb on display :88 with res 1280x1024x24
Executing robot tests at log level TRACE
==
OpenECOMP ETE
==
OpenECOMP ETE.Robot
==
OpenECOMP ETE.Robot.Testsuites
==
[ ERROR ] Error in file 
'/var/opt/OpenECOMP_ETE/robot/resources/oof_interface.robot': Setting variable 
'${OOF_HOMING_ENDPOINT}' failed: Variable '${GLOBAL_OOF_SERVER_PROTOCOL}' not 
found. Did you mean:
${GLOBAL_MSO_SERVER_PROTOCOL}
${GLOBAL_LOG_SERVER_PROTOCOL}
${GLOBAL_MR_SERVER_PROTOCOL}
${GLOBAL_VID_SERVER_PROTOCOL}
${GLOBAL_NBI_SERVER_PROTOCOL}
${GLOBAL_MSB_SERVER_PROTOCOL}
${GLOBAL_CLI_SERVER_PROTOCOL}
${GLOBAL_AAI_SERVER_PROTOCOL}
${GLOBAL_VNFSDK_SERVER_PROTOCOL}
${GLOBAL_PORTAL_SERVER_PROTOCOL}
[ ERROR ] Error in file 
'/v

Re: [onap-discuss] OOM Beijing

2018-05-06 Thread abdelmuhaimen.seaudi
Hi,

I noticed that the TCA-CDAP and VES components are commented out in 
oom/kubernetes/dcaegen2/charts/dcae-bootstrap/values.yaml:
#  tca: 
nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.tca-cdap-container.tca-cdap-container:1.0.0
#  ves: 
nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1-latest


What is the reason for that ?

If I comment out those lines, how can I deploy the new components in OOM 
Beijing ?

Thanks

A. Seaudi


From: FREEMAN, BRIAN D [bf1...@att.com]
Sent: Saturday, May 05, 2018 5:01 PM
To: SEAUDI Abdelmuhaimen OBS/CSO; onap-discuss@lists.onap.org
Subject: RE: OOM Beijing

Update your vm_properties.py in the robot helm charts and do a make robot, make 
onap, helm upgrade etc

https://gerrit.onap.org/r/#/c/44399/


the DMaaP issue is a bigger issue for you. OOF is for homing assignments for 
VNFs and the OOM config changes haven't been merged yet.
The error simply means the variable hasn’t been defined in the OOM version of 
vm_properties.py. The HEAT install based version handles the creation/update of 
vm_properties.py differently since IP addresses can be used in the HEAT install 
but the K8 install needs the internal K8 service names.

Brian


From: onap-discuss-boun...@lists.onap.org <onap-discuss-boun...@lists.onap.org> 
On Behalf Of abdelmuhaimen.sea...@orange.com
Sent: Saturday, May 05, 2018 9:18 AM
To: onap-discuss@lists.onap.org
Subject: [onap-discuss] OOM Beijing

Hi,

I was trying to study CDAP/DCAE using OOM Amsterdam, but I faced some issues, 
and I read in a recent mail on the mailing list that this method is not supported, 
and that the supported method now in Beijing is to have DCAE containerized, 
instead of the Heat deployment from OOM Amsterdam.

I finished the installation of OOM Beijing, and after I ran the ete health 
robot script it gave the following output.

Is this normal, and what does the OOF_HOMING_ENDPOINT error mean?

Also, what does the error for NBI mean?

Do I need to update the kube2msb token as in the Amsterdam release, or is this no 
longer needed?




root@olc-oom-bjng:~/oom/kubernetes/robot# ./ete-k8s.sh onap health
Starting Xvfb on display :88 with res 1280x1024x24
Executing robot tests at log level TRACE
==
OpenECOMP ETE
==
OpenECOMP ETE.Robot
==
OpenECOMP ETE.Robot.Testsuites
==
[ ERROR ] Error in file 
'/var/opt/OpenECOMP_ETE/robot/resources/oof_interface.robot': Setting variable 
'${OOF_HOMING_ENDPOINT}' failed: Variable '${GLOBAL_OOF_SERVER_PROTOCOL}' not 
found. Did you mean:
${GLOBAL_MSO_SERVER_PROTOCOL}
${GLOBAL_LOG_SERVER_PROTOCOL}
${GLOBAL_MR_SERVER_PROTOCOL}
${GLOBAL_VID_SERVER_PROTOCOL}
${GLOBAL_NBI_SERVER_PROTOCOL}
${GLOBAL_MSB_SERVER_PROTOCOL}
${GLOBAL_CLI_SERVER_PROTOCOL}
${GLOBAL_AAI_SERVER_PROTOCOL}
${GLOBAL_VNFSDK_SERVER_PROTOCOL}
${GLOBAL_PORTAL_SERVER_PROTOCOL}
[ ERROR ] Error in file 
'/var/opt/OpenECOMP_ETE/robot/resources/oof_interface.robot': Setting variable 
'${OOF_SNIRO_ENDPOINT}' failed: Variable '${GLOBAL_OOF_SERVER_PROTOCOL}' not 
found. Did you mean:
${GLOBAL_MSO_SERVER_PROTOCOL}
${GLOBAL_LOG_SERVER_PROTOCOL}
${GLOBAL_MR_SERVER_PROTOCOL}
${GLOBAL_VID_SERVER_PROTOCOL}
${GLOBAL_NBI_SERVER_PROTOCOL}
${GLOBAL_MSB_SERVER_PROTOCOL}
${GLOBAL_CLI_SERVER_PROTOCOL}
${GLOBAL_AAI_SERVER_PROTOCOL}
${GLOBAL_VNFSDK_SERVER_PROTOCOL}
${GLOBAL_PORTAL_SERVER_PROTOCOL}
OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp components are...
==
Basic A Health Check   | PASS |
--
Basic AAF Health Check[ WARN ] 
Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) 
after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or 
service not known',)': 
/authz/perms/user/d...@openecomp.org<mailto:/authz/perms/user/d...@openecomp.org>
[ WARN ] Retrying (Retry(total=1, connect=None, read=None, redirect=None, 
status=None)) after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or 
service not known',)': 
/authz/perms/user/d...@openecomp.org<mailto:/authz/perms/user/d...@openecomp.org>
[ WARN ] Retrying (Retry(total=0, connect=None, read=None, redirect=None, 
status=None)) after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or 
service not known',)': 
/authz/perms/user/

Re: [onap-discuss] OOM Beijing

2018-05-05 Thread FREEMAN, BRIAN D
Update your vm_properties.py in the robot helm charts and do a make robot, make 
onap, helm upgrade etc

https://gerrit.onap.org/r/#/c/44399/


the DMaaP issue is a bigger issue for you. OOF is for homing assignments for 
VNFs and the OOM config changes haven't been merged yet.
The error simply means the variable hasn't been defined in the OOM version of 
vm_properties.py. The HEAT install based version handles the creation/update of 
vm_properties.py differently since IP addresses can be used in the HEAT install 
but the K8 install needs the internal K8 service names.
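The in-cluster names Brian refers to follow the standard Kubernetes service DNS pattern; a sketch of how such a name is composed, using the AAI service and onap namespace that appear elsewhere in this thread:

```shell
# Sketch: Kubernetes service DNS names have the form
# <service>.<namespace>.svc.<cluster-domain>, which is why the K8s install's
# vm_properties.py needs names like aai.onap instead of floating IPs.
service="aai"
namespace="onap"
cluster_domain="cluster.local"
fqdn="${service}.${namespace}.svc.${cluster_domain}"
echo "${fqdn}"
```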

Brian


From: onap-discuss-boun...@lists.onap.org <onap-discuss-boun...@lists.onap.org> 
On Behalf Of abdelmuhaimen.sea...@orange.com
Sent: Saturday, May 05, 2018 9:18 AM
To: onap-discuss@lists.onap.org
Subject: [onap-discuss] OOM Beijing

Hi,

I was trying to study CDAP/DCAE using OOM Amsterdam, but I faced some issues, 
and I read in a recent mail on the mailing list that this method is not supported, 
and that the supported method now in Beijing is to have DCAE containerized, 
instead of the Heat deployment from OOM Amsterdam.

I finished the installation of OOM Beijing, and after I ran the ete health 
robot script it gave the following output.

Is this normal, what does the OOF_HOMING_ENDPOINT error mean ?

Also what is the error for NBI mean ?

Do I need to update the kube2msb token like Amsterdam release, or is this no 
longer needed ?




root@olc-oom-bjng:~/oom/kubernetes/robot# ./ete-k8s.sh onap health
Starting Xvfb on display :88 with res 1280x1024x24
Executing robot tests at log level TRACE
==
OpenECOMP ETE
==
OpenECOMP ETE.Robot
==
OpenECOMP ETE.Robot.Testsuites
==
[ ERROR ] Error in file 
'/var/opt/OpenECOMP_ETE/robot/resources/oof_interface.robot': Setting variable 
'${OOF_HOMING_ENDPOINT}' failed: Variable '${GLOBAL_OOF_SERVER_PROTOCOL}' not 
found. Did you mean:
${GLOBAL_MSO_SERVER_PROTOCOL}
${GLOBAL_LOG_SERVER_PROTOCOL}
${GLOBAL_MR_SERVER_PROTOCOL}
${GLOBAL_VID_SERVER_PROTOCOL}
${GLOBAL_NBI_SERVER_PROTOCOL}
${GLOBAL_MSB_SERVER_PROTOCOL}
${GLOBAL_CLI_SERVER_PROTOCOL}
${GLOBAL_AAI_SERVER_PROTOCOL}
${GLOBAL_VNFSDK_SERVER_PROTOCOL}
${GLOBAL_PORTAL_SERVER_PROTOCOL}
[ ERROR ] Error in file 
'/var/opt/OpenECOMP_ETE/robot/resources/oof_interface.robot': Setting variable 
'${OOF_SNIRO_ENDPOINT}' failed: Variable '${GLOBAL_OOF_SERVER_PROTOCOL}' not 
found. Did you mean:
${GLOBAL_MSO_SERVER_PROTOCOL}
${GLOBAL_LOG_SERVER_PROTOCOL}
${GLOBAL_MR_SERVER_PROTOCOL}
${GLOBAL_VID_SERVER_PROTOCOL}
${GLOBAL_NBI_SERVER_PROTOCOL}
${GLOBAL_MSB_SERVER_PROTOCOL}
${GLOBAL_CLI_SERVER_PROTOCOL}
${GLOBAL_AAI_SERVER_PROTOCOL}
${GLOBAL_VNFSDK_SERVER_PROTOCOL}
${GLOBAL_PORTAL_SERVER_PROTOCOL}
OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp components are...
==
Basic A Health Check   | PASS |
--
Basic AAF Health Check[ WARN ] 
Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) 
after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or 
service not known',)': 
/authz/perms/user/d...@openecomp.org<mailto:/authz/perms/user/d...@openecomp.org>
[ WARN ] Retrying (Retry(total=1, connect=None, read=None, redirect=None, 
status=None)) after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or 
service not known',)': 
/authz/perms/user/d...@openecomp.org<mailto:/authz/perms/user/d...@openecomp.org>
[ WARN ] Retrying (Retry(total=0, connect=None, read=None, redirect=None, 
status=None)) after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or 
service not known',)': 
/authz/perms/user/d...@openecomp.org<mailto:/authz/perms/user/d...@openecomp.org>
| FAIL |
ConnectionError: HTTPConnectionPool(host='aaf.onap', port=8101): Max retries 
exceeded with url: 
/authz/perms/user/d...@openecomp.org<mailto:/authz/perms/user/d...@openecomp.org>
 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or 
service not known',))
--
Basic APPC Health Check   | PASS |
--
Basic CLI Health Check

[onap-discuss] OOM Beijing

2018-05-05 Thread abdelmuhaimen.seaudi
Hi,

I was trying to study CDAP/DCAE using OOM Amsterdam, but I faced some issues, 
and I read in a recent mail on the mailing list that this method is not supported; 
the supported method in Beijing is to have DCAE containerized, 
instead of the Heat deployment from OOM Amsterdam.

I finished the installation of OOM Beijing, and when I ran the ete health 
robot script it gave the following output.

Is this normal? What does the OOF_HOMING_ENDPOINT error mean?

Also, what does the error for NBI mean?

Do I need to update the kube2msb token like in the Amsterdam release, or is this no 
longer needed?




root@olc-oom-bjng:~/oom/kubernetes/robot# ./ete-k8s.sh onap health
Starting Xvfb on display :88 with res 1280x1024x24
Executing robot tests at log level TRACE
==
OpenECOMP ETE
==
OpenECOMP ETE.Robot
==
OpenECOMP ETE.Robot.Testsuites
==
[ ERROR ] Error in file 
'/var/opt/OpenECOMP_ETE/robot/resources/oof_interface.robot': Setting variable 
'${OOF_HOMING_ENDPOINT}' failed: Variable '${GLOBAL_OOF_SERVER_PROTOCOL}' not 
found. Did you mean:
${GLOBAL_MSO_SERVER_PROTOCOL}
${GLOBAL_LOG_SERVER_PROTOCOL}
${GLOBAL_MR_SERVER_PROTOCOL}
${GLOBAL_VID_SERVER_PROTOCOL}
${GLOBAL_NBI_SERVER_PROTOCOL}
${GLOBAL_MSB_SERVER_PROTOCOL}
${GLOBAL_CLI_SERVER_PROTOCOL}
${GLOBAL_AAI_SERVER_PROTOCOL}
${GLOBAL_VNFSDK_SERVER_PROTOCOL}
${GLOBAL_PORTAL_SERVER_PROTOCOL}
[ ERROR ] Error in file 
'/var/opt/OpenECOMP_ETE/robot/resources/oof_interface.robot': Setting variable 
'${OOF_SNIRO_ENDPOINT}' failed: Variable '${GLOBAL_OOF_SERVER_PROTOCOL}' not 
found. Did you mean:
${GLOBAL_MSO_SERVER_PROTOCOL}
${GLOBAL_LOG_SERVER_PROTOCOL}
${GLOBAL_MR_SERVER_PROTOCOL}
${GLOBAL_VID_SERVER_PROTOCOL}
${GLOBAL_NBI_SERVER_PROTOCOL}
${GLOBAL_MSB_SERVER_PROTOCOL}
${GLOBAL_CLI_SERVER_PROTOCOL}
${GLOBAL_AAI_SERVER_PROTOCOL}
${GLOBAL_VNFSDK_SERVER_PROTOCOL}
${GLOBAL_PORTAL_SERVER_PROTOCOL}
OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp components are...
==
Basic A Health Check   | PASS |
--
Basic AAF Health Check[ WARN ] 
Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) 
after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or 
service not known',)': /authz/perms/user/d...@openecomp.org
[ WARN ] Retrying (Retry(total=1, connect=None, read=None, redirect=None, 
status=None)) after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or 
service not known',)': /authz/perms/user/d...@openecomp.org
[ WARN ] Retrying (Retry(total=0, connect=None, read=None, redirect=None, 
status=None)) after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or 
service not known',)': /authz/perms/user/d...@openecomp.org
| FAIL |
ConnectionError: HTTPConnectionPool(host='aaf.onap', port=8101): Max retries 
exceeded with url: /authz/perms/user/d...@openecomp.org (Caused by 
NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or 
service not known',))
--
Basic APPC Health Check   | PASS |
--
Basic CLI Health Check| PASS |
--
Basic CLAMP Health Check  | PASS |
--
Basic DCAE Health Check   | PASS |
--
Basic DMAAP Message Router Health Check   | PASS |
--
[ WARN ] Retrying (Retry(total=2, connect=None, read=None, redirect=None, 
status=None)) after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or 
service not known',)': /nbi/api/v1/status
Basic External API NBI Health Check   [ WARN ] 
Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) 
after connection broken by 
'NewConnectionError(': Failed 
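The "[Errno -2] Name or service not known" failures above are DNS resolution errors: the robot pod cannot resolve names like aaf.onap or the NBI service. A quick way to confirm, as a sketch (run it from inside the failing pod, substituting the service name you are debugging):

```python
import socket

def resolves(hostname):
    """Return True if the name resolves; a gaierror here is the same
    failure urllib3 reports as '[Errno -2] Name or service not known'."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# e.g. resolves("aaf.onap") -- False from inside the robot pod would
# explain the ConnectionError in the health check output above.
```

If the name does not resolve, the usual suspects are the component not being deployed in that namespace or cluster DNS (kube-dns) problems.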

[onap-discuss] OOM Beijing HW requirements

2018-05-04 Thread Michal Ptacek



 
Hi all,
 
I am trying to deploy the latest ONAP (Beijing) using OOM (master branch), and I hit some challenges with the suggested HW resources for an all-in-one (rancher server + k8s host on a single node) deployment of ONAP:
 
HW resources:
   24 VCPU
128G RAM
500G disc space
rhel7.4 os
rancher 1.6.14
kubernetes 1.8.10
helm 2.8.2
kubectl 1.8.12
docker 17.03-ce
 
Problem:
some random pods are unable to spawn and are stuck in "Pending" state with the error "No nodes are available that match all of the
predicates: Insufficient pods (1)."
 
Analysis:
it seems that my server can allocate at most 110 pods, which I found in the "kubectl describe nodes" output
...
Allocatable:
cpu: 24
memory:  102861656Ki
pods:    110
...
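The allocatable figure in this output can also be pulled out programmatically rather than eyeballed; a sketch, assuming the describe-nodes format shown above:

```python
def allocatable_pods(describe_output):
    """Extract the 'pods' value from the Allocatable section of
    'kubectl describe nodes' output; None if not found."""
    in_alloc = False
    for line in describe_output.splitlines():
        if line.startswith("Allocatable:"):
            in_alloc = True
        elif in_alloc and line.strip().startswith("pods:"):
            return int(line.split(":")[1])
    return None

sample = """Allocatable:
 cpu:     24
 memory:  102861656Ki
 pods:    110
"""
print(allocatable_pods(sample))  # prints: 110
```

For reference, 110 is the kubelet's default --max-pods value, so raising the limit is a kubelet argument change; whether a single node can actually sustain 120+ ONAP pods is a separate question of CPU, memory, and I/O.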
a full ONAP deployment with all components enabled may need 120+ pods, but I did not even get those 110 running,
maybe due to some race condition for the last pods. If I disable 5+ components in onap/values.yaml, it fits on that server, but then it seems that the "Minimum Hardware Configuration" described in
http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_cloud_setup_guide.html
is wrong (120 GB RAM, 160 GB disk, 16 vCPU).
 
Or is there any hint on how to increase that maximum number of allocatable pods?
 
thanks,
Michal
 
 
 
 
 
 
 
  

___
onap-discuss mailing list
onap-discuss@lists.onap.org
https://lists.onap.org/mailman/listinfo/onap-discuss




Re: [onap-discuss] [oom] beijing guidelines ?

2018-03-26 Thread Roger Maitland
Hi Kanagaraj,

A new set of 
documentation<https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=docs;h=6e65f3ff6104fcb8d69d833e7b757abef6269121;hb=refs/heads/master>
 is now in the oom master branch but has not been published to 
readthedocs<http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/OOM%20Project%20Description/oom_project_description.html>
 yet. I’m investigating why and will hopefully have the issue resolved quickly.

Cheers,
Roger

From: <onap-discuss-boun...@lists.onap.org> on behalf of Kanagaraj Manickam 
<kanagaraj.manic...@huawei.com>
Date: Monday, March 26, 2018 at 7:12 AM
To: "onap-discuss@lists.onap.org" <onap-discuss@lists.onap.org>
Subject: [onap-discuss] [oom] beijing guidelines ?

Dear OOM team,

I have found the Amsterdam OOM guidelines in the wiki, but not the ones for Beijing.
Could you please help me find the guidelines for installing ONAP using OOM with 
the Beijing release? Thank you.


Regards
Kanagaraj M
-
Be transparent! Win together !!

本邮件及其附件含有华为公司的保密信息,仅限于发送给上面地址中列出的个人或群组。禁止任何其他人以任何形式使用(包括但不限于全部或部分地泄露、复制、或散发)本邮件中的信息。如果您错收了本邮件,请您立即电话或邮件通知发件人并删除本邮件!
This e-mail and its attachments contain confidential information from HUAWEI, 
which is intended only for the person  or entity whose address is listed above. 
Any use of the information contained herein in any way (including, but not   
limited to, total or partial disclosure, reproduction, or dissemination) by 
persons other than the intended recipient(s) is  prohibited. If you receive 
this e-mail in error, please notify the sender by phone or email immediately 
and delete it!



This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement,

you may review at https://www.amdocs.com/about/email-disclaimer 
<https://www.amdocs.com/about/email-disclaimer>


[onap-discuss] [oom] beijing guidelines ?

2018-03-26 Thread Kanagaraj Manickam
Dear OOM team,

I have found the Amsterdam OOM guidelines in the wiki, but not the ones for Beijing.
Could you please help me find the guidelines for installing ONAP using OOM with 
the Beijing release? Thank you.


Regards
Kanagaraj M
-
Be transparent! Win together !!

本邮件及其附件含有华为公司的保密信息,仅限于发送给上面地址中列出的个人或群组。禁止任何其他人以任何形式使用(包括但不限于全部或部分地泄露、复制、或散发)本邮件中的信息。如果您错收了本邮件,请您立即电话或邮件通知发件人并删除本邮件!
This e-mail and its attachments contain confidential information from HUAWEI, 
which is intended only for the person  or entity whose address is listed above. 
Any use of the information contained herein in any way (including, but not   
limited to, total or partial disclosure, reproduction, or dissemination) by 
persons other than the intended recipient(s) is  prohibited. If you receive 
this e-mail in error, please notify the sender by phone or email immediately 
and delete it!

