Thanks Michael.

a. For kubectl - we ensured that version 1.7.0 is installed because this is the
version that we have in our Kubernetes cluster.

However, we are still struggling a little bit with the kubectl configuration,
in terms of what should work where (the checks we are running are sketched
below the cluster-info output).


root@localhost:~# kubectl cluster-info
Kubernetes master is running at https://10.110.208.207:443/
dnsmasq is running at 
https://10.110.208.207:443//api/v1/namespaces/kube-system/services/dnsmasq/proxy
kubedns is running at 
https://10.110.208.207:443//api/v1/namespaces/kube-system/services/kubedns/proxy
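
For reference, this is a rough sketch of the checks we are running to verify the
kubectl configuration (assuming the kubeconfig generated for the cluster has been
copied into ~/.kube/config; nothing below is ONAP-specific):

# which cluster/context is kubectl pointing at?
kubectl config view
kubectl config current-context

# is the API server reachable and are the nodes registered?
kubectl get nodes
kubectl get pods --all-namespaces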

b. For the mix-up of dockers for ESR - we raised JIRA ticket
https://jira.onap.org/browse/OOM-493.


This is the understanding that we have --
1. set up the Kubernetes cluster,
2. install kubectl,
3. install Helm,
prior to doing the git clone for OOM (a rough sketch of this sequence follows below).

Is there anything else that we should be doing prior to the git clone? Thanks for
confirming.
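
A rough sketch of that sequence (assuming an Ubuntu host; the Rancher/Helm/kubectl
versions are taken from Michael's mail below, and the gerrit clone URL is our
assumption):

# 1. Kubernetes cluster (we create it with Rancher 1.6.10)
# 2. install the stable kubectl release
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
# 3. install Helm 2.3.0 (server and client)
# 4. only then clone OOM (clone URL assumed)
git clone https://gerrit.onap.org/r/oom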

with best regards
gaurav
________________________________
From: onap-discuss-boun...@lists.onap.org <onap-discuss-boun...@lists.onap.org> 
on behalf of Michael O'Brien <frank.obr...@amdocs.com>
Sent: 07 December 2017 06:28:43
To: onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] Issues with Kubectl setup and newly pulled dockers 
not showing in kubectl list ---namespace


Gaurav,

   Hi, some more points – and I hope your demo went well and thanks again for
raising all the issues that helped us fix up the Amsterdam branch.



1)      Kubectl 1.7.0 vs supported 1.8.4 – I suspect this is your issue.

  I last ran the following bootstrap script at noon on a lean machine and after 
running cd.sh had a system up after the prepull in 23 min with no issues except 
for the known aaf container.

https://github.com/obrienlabs/onap-root/blob/master/oom_rancher_setup_1.sh

   Your issue may be the version of kubectl – our currently stable release
config is as follows on the page
https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-QuickstartInstallation



Rancher v1.6.10 (not the stable or latest tag)

Helm 2.3.0 on the server and client

Kubectl 1.8.4 (current stable version) – yours is 1.7.0 (I have seen 1.8.8 
running)



You should be running


curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s 
https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
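
followed by the usual install steps (a short sketch; these are the same steps as
in your original mail, just against the stable binary):

chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
kubectl version --client   # should now report a 1.8.x client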





2)      Mixing raw dockers with Kubernetes managed containers

You will need a JIRA to add the Kubernetes infrastructure for AAI ESR – I don’t 
see the containers in HEAT either (aai1/aai2)

 As Alexis stated: if you need a new component, it must have a yaml set and be
deleted/created via the scripts or directly with helm – esr is not listed in
HELM_APPS.

  Containers started with raw docker run commands or docker-compose up will be
outside the "onap" namespace set up by Kubernetes – they can run independently
and be accessed, but they will be outside the 30xxx (NodePort) range, as in not
managed by Kubernetes. ESR would need a config set, a set of template yamls, an
all-services.yaml entry to set the external port mappings, a Chart.yaml for helm
and a values.yaml to list the docker images and their version tags.
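
As a rough illustration only, the layout a new component picks up under
oom/kubernetes (the directory and file names below are a sketch of the Amsterdam
structure, not an actual esr chart – one does not exist yet):

oom/kubernetes/esr/
  Chart.yaml               # helm chart name and version
  values.yaml              # nexus docker image names and version tags
  templates/
    esr-deployment.yaml    # pod/container spec: ports, volumes, readiness probes
    all-services.yaml      # NodePort services exposing the external 30xxx ports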

       As I remember, ESR (External System Registrar) is part of AAI portal
onboarding – I did see these two images in the older manifest, but currently the
9-container AAI pod does not have a definition for ESR.

https://gerrit.onap.org/r/gitweb?p=integration.git;a=blob;f=version-manifest/src/main/resources/docker-manifest.csv;h=35e992adc0678ccd98328d33c9a5ffc88dbb4dfa;hb=refs/heads/master
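
(If you want to check the manifest yourself, a quick sketch – the gerrit clone
URL here is an assumption, the csv path is taken from the link above:)

git clone https://gerrit.onap.org/r/integration
grep -i esr integration/version-manifest/src/main/resources/docker-manifest.csv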





For a reference Amsterdam install to check on, see the live CD server
and the healthcheck references (thanks to Shane Daniel for adding this
dashboard to the ELK stack):



http://jenkins.onap.info/job/oom-cd/606/console





http://kibana.onap.info:5601/app/kibana#/dashboard/AWAtvpS63NTXK5mX2kuS?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now-24h,mode:quick,to:now))&_a=(description:'',filters:!(),options:(darkTheme:!f),panels:!((col:1,id:AWAts77k3NTXK5mX2kuM,panelIndex:1,row:1,size_x:8,size_y:3,type:visualization),(col:9,id:AWAtuTVI3NTXK5mX2kuP,panelIndex:2,row:1,size_x:4,size_y:3,type:visualization),(col:1,id:AWAtuBTY3NTXK5mX2kuO,panelIndex:3,row:7,size_x:6,size_y:3,type:visualization),(col:1,id:AWAttmqB3NTXK5mX2kuN,panelIndex:4,row:4,size_x:6,size_y:3,type:visualization),(col:7,id:AWAtvHtY3NTXK5mX2kuR,panelIndex:6,row:4,size_x:6,size_y:6,type:visualization)),query:(match_all:()),timeRestore:!f,title:'CD%20Health%20Check',uiState:(),viewMode:view)



thank you

/michael





From: Alexis de Talhouët [mailto:adetalhoue...@gmail.com]
Sent: Wednesday, December 6, 2017 11:10
To: Gaurav Gupta (c) <guptagau...@vmware.com>
Cc: onap-discuss@lists.onap.org; Michael O'Brien <frank.obr...@amdocs.com>; 
Ramesh Tammana <rame...@vmware.com>; Ramki Krishnan <ram...@vmware.com>; Arun 
Arora (c) <aroraa...@vmware.com>
Subject: Re: Issues with Kubectl setup and newly pulled dockers not showing in 
kubectl list ---namespace



I see, so for point 1 I cannot help much. It worked well for me.



For the second point, do kubernetes manifests exist for esr and esr gui? You 
need the deployment and the service kubernetes manifests as explained in my 
previous mail.



Thanks,

Alexis



On Dec 6, 2017, at 10:57 AM, Gaurav Gupta (c) <guptagau...@vmware.com> wrote:



thanks Alexis



Responses inline.

with best regards

gaurav

________________________________

From: Alexis de Talhouët <adetalhoue...@gmail.com>
Sent: 06 December 2017 19:57
To: Gaurav Gupta (c)
Cc: onap-discuss@lists.onap.org; Michael O'Brien; Ramesh Tammana; Ramki Krishnan; Arun Arora (c)
Subject: Re: Issues with Kubectl setup and newly pulled dockers not showing in 
kubectl list ---namespace



Hi,



Some answers inline.



Thanks,

Alexis



On Dec 6, 2017, at 8:34 AM, Gaurav Gupta (c) <guptagau...@vmware.com> wrote:



Hi all, Michael, Alexis,



We are running into 2 very distinct issues while trying the vDNS demo on OOM.



Issue 1 - kubectl

The steps we followed and the resulting output are below.

 1. curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl

  2. chmod +x ./kubectl

  3. mv kubectl /usr/local/bin

   4. setting up config in ~/.kube/config



5. while running kubectl cluster-info and kubectl cluster-info dump:



root@k8s-node-0-30a414d1-5a13-41a5-a15a-5970f7fb74ed:~/.kube# kubectl cluster-info dump

Unable to connect to the server: dial tcp 10.110.208.207:443: i/o timeout



root@k8s-node-0-30a414d1-5a13-41a5-a15a-5970f7fb74ed:~/.kube# kubectl cluster-info

Kubernetes master is running at https://10.110.208.207:443/



AdT: I'm not sure I understand the issue being raised here. Can you clarify?



[Gaurav] - I think I have some issues with the kubectl setup, so if someone has
attempted the kubectl installation, faced similar issues and resolved them, that
will help me as well.

This is needed so that the OOM-based deployment can be done; the health check
scripts refer to kubectl commands.
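
In case it helps, this is the kind of basic check we are running to narrow it
down (a sketch; the IP address is from our setup):

# can the node reach the Kubernetes API endpoint at all?
curl -k https://10.110.208.207:443/healthz
# which server and credentials is kubectl actually using?
kubectl config view --minify
kubectl get nodes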



Issue 2 - we pulled 2 ESR dockers manually inside our OOM deployment. However,
the new dockers are not getting listed by the kubectl commands. So, essentially
these containers do not have any specific IP address or port attached to them.



  Questions:

                 - Are we missing something? I mean, is this even a valid
operation in OOM?



AdT: Pulling docker images and starting them will not make them appear in the
Kubernetes UI. For that, you need to create Kubernetes manifests defining the
deployment, e.g. how the docker container should look (port, entrypoint, volume,
pre-conditions, …) and the service, exposing the ports defined in the deployment
to the external world (when using NodePort).
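
For illustration only, a minimal sketch of what such a pair of manifests could
look like (the names, image, namespace and port numbers below are placeholders,
not the real esr definition):

cat <<'EOF' | kubectl create -f - --namespace onap-esr
apiVersion: extensions/v1beta1          # Deployment API group in k8s 1.7/1.8
kind: Deployment
metadata:
  name: esr-server
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: esr-server
    spec:
      containers:
      - name: esr-server
        image: nexus3.onap.org:10001/onap/aai/esr-server:1.1.0   # placeholder image/tag
        ports:
        - containerPort: 9518                                    # placeholder port
---
apiVersion: v1
kind: Service
metadata:
  name: esr-server
spec:
  type: NodePort
  selector:
    app: esr-server
  ports:
  - port: 9518
    nodePort: 30518          # pick a free port in the 30000-32767 range
EOF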



[Gaurav] - The use case is: we had an OOM deployment done with a few dockers,
but we wanted to pull 2 more dockers, i.e. esr and esr gui. We did pull the
dockers from the nexus repo, but we do not see them getting listed in kubectl
list --namespaces. Because of this we are also assuming that no port / IP address
is getting assigned to them.

Can this use case work, or do we have to do a complete fresh OOM deployment?





                 - Is there anything else that we should be doing in this
situation?



thanks in advance



with best regards

gaurav







