It is possible to enforce an ordering by using a readinessCheck init container. This 
is done in many places, such as SDC (although there are other problems there that 
should be solved soon by the SDC project).
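
For illustration, a minimal sketch of the pattern (the oomk8s/readiness-check image 
and ready.py arguments follow the convention used in the OOM charts; the container 
name here is illustrative). The init container blocks the main containers from 
starting until the named container in the namespace reports ready:

initContainers:
- name: mso-readiness
  # poll until the mariadb container in this namespace is ready
  image: oomk8s/readiness-check:1.0.0
  imagePullPolicy: Always
  command:
  - /root/ready.py
  args:
  - --container-name
  - mariadb
  env:
  - name: NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace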
Which component has 8 containers?

Thanks,
Borislav Glozman
O:+972.9.776.1988
M:+972.52.2835726

Amdocs, a Platinum member of ONAP


-----Original Message-----
From: FREEMAN, BRIAN D [mailto:bf1...@att.com] 
Sent: Wednesday, February 7, 2018 5:43 PM
To: Borislav Glozman <borislav.gloz...@amdocs.com>; Mandeep Khinda 
<mandeep.khi...@amdocs.com>; Alexis de Talhouët <adetalhoue...@gmail.com>
Cc: onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] [OOM] restart an entire POD ?

Agreed, and for simple components I do that. When there are 8 pods in a component, 
getting the order correct is important, and a pain.

Brian


-----Original Message-----
From: Borislav Glozman [mailto:borislav.gloz...@amdocs.com] 
Sent: Wednesday, February 07, 2018 10:41 AM
To: Mandeep Khinda <mandeep.khi...@amdocs.com>; FREEMAN, BRIAN D 
<bf1...@att.com>; Alexis de Talhouët <adetalhoue...@gmail.com>
Cc: onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] [OOM] restart an entire POD ?

What I do to bounce a pod is just delete it.

Kubernetes will recreate it by itself (the Deployment maintains the replica count).



Example:

root@borislav-rancher-test:/opt/oom/kubernetes/oneclick# kubectl get pods -n onap-mso
NAME                       READY     STATUS    RESTARTS   AGE
mariadb-6487b74997-9hcpg   1/1       Running   0          2d
mso-6d6f86958b-n2h7p       2/2       Running   0          2d
root@borislav-rancher-test:/opt/oom/kubernetes/oneclick# kubectl delete po -n onap-mso mso-6d6f86958b-n2h7p
pod "mso-6d6f86958b-n2h7p" deleted
root@borislav-rancher-test:/opt/oom/kubernetes/oneclick# kubectl get pods -n onap-mso -w
NAME                       READY     STATUS        RESTARTS   AGE
mariadb-6487b74997-9hcpg   1/1       Running       0          2d
mso-6d6f86958b-l7tk9       0/2       Init:0/1      0          2s
mso-6d6f86958b-n2h7p       2/2       Terminating   0          2d
mso-6d6f86958b-n2h7p       0/2       Terminating   0          2d
mso-6d6f86958b-l7tk9       0/2       Init:0/1      0          13s
mso-6d6f86958b-l7tk9       0/2       PodInitializing   0      17s
mso-6d6f86958b-l7tk9       1/2       Running       0          19s
mso-6d6f86958b-l7tk9       2/2       Running       0          30s



Thanks,
Borislav Glozman
O:+972.9.776.1988
M:+972.52.2835726

Amdocs, a Platinum member of ONAP



-----Original Message-----
From: onap-discuss-boun...@lists.onap.org [mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of Mandeep Khinda
Sent: Wednesday, February 7, 2018 5:17 PM
To: FREEMAN, BRIAN D <bf1...@att.com>; Alexis de Talhouët <adetalhoue...@gmail.com>
Cc: onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] [OOM] restart an entire POD ?



One negative of deleteAll with our Helm charts in their current incarnation is 
that it deletes everything in the namespace, including services (pod and cluster 
IPs will change when they come back) and database processes, though not the 
persisted data.
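
To see the effect, compare the service cluster IPs before and after a 
delete/create cycle (AAI here is just an example app):

kubectl -n onap-aai get svc    # note the CLUSTER-IP column
./oom/kubernetes/oneclick/deleteAll.sh -n onap -a aai
./oom/kubernetes/oneclick/createAll.sh -n onap -a aai
kubectl -n onap-aai get svc    # the recreated services come back with new cluster IPs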



If you are impatient like I am, I target just the deployment I want to bounce 
by exporting the currently running YAML and then following something similar to 
the link Josef had sent.



For example, this is what I would use to bounce just the SO JBoss container:

kubectl -n onap-mso get deployment mso -o=yaml > /tmp/mso.app.yaml
kubectl -n onap-mso delete -f /tmp/mso.app.yaml
kubectl -n onap-mso create -f /tmp/mso.app.yaml
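
A lighter-weight alternative, assuming you only want to bounce the pods of the 
mso deployment shown above, is to scale it to zero and back up; this avoids the 
export/delete/create round trip:

kubectl -n onap-mso scale deployment mso --replicas=0
kubectl -n onap-mso scale deployment mso --replicas=1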





Mandeep Khinda
Software Development
Open Network Division
+1.613.595.5132 (office)




On 2018-02-07, 10:00 AM, "onap-discuss-boun...@lists.onap.org on behalf of FREEMAN, BRIAN D" <onap-discuss-boun...@lists.onap.org on behalf of bf1...@att.com> wrote:



    OK

    I assume deleteAll.sh does not remove the dockernfs, so persistent data 
should not be lost, just any "fixes" to the container config.

    I can work with that.

    Brian

    -----Original Message-----
    From: Alexis de Talhouët [mailto:adetalhoue...@gmail.com]
    Sent: Wednesday, February 07, 2018 9:56 AM
    To: FREEMAN, BRIAN D <bf1...@att.com>
    Cc: onap-discuss@lists.onap.org; Mike Elliott <mike.elli...@amdocs.com>
    Subject: Re: [onap-discuss] [OOM] restart an entire POD ?

    

    Hi Brian,

    Those issues are tracked in JIRA already. Adding Mike, who is looking at it 
(I think).

    About your question: you cannot do this through the K8S UI, at least not 
that I’m aware of.
    But using our scripts, you can delete and create a specific app.

    For instance:
    ./oom/kubernetes/oneclick/deleteAll.sh -n onap -a aai <— will delete the 
whole AAI namespace (deployment and services)
    ./oom/kubernetes/oneclick/createAll.sh -n onap -a aai <— will create the 
whole AAI namespace (deployment and services)

    I’m not sure this is what you’re after, but that’s how I do it when I need 
to bounce a whole application (e.g. all the containers of an app).

    Alexis

    > On Feb 7, 2018, at 9:34 AM, FREEMAN, BRIAN D <bf1...@att.com> wrote:
    > 
    > Michael, Alexis,
    > 
    > I'm having race conditions when I use OOM in Azure where the health check 
passes but distribution fails (MSO and AAI never get notified).
    > 
    > I restarted the SO front-end pod and SO successfully picked up a model 
distribution.
    > 
    > I tried to restart just the AAI model loader, but that didn't seem to 
work, so I need to restart all of AAI.
    > 
    > I suspect that SO and AAI came up before DMaaP was up, but I can't confirm 
that.
    > 
    > Is there an easy / safe way to restart an entire domain through the K8s 
portal?
    > 
    > Feel free to point me at the right documentation on the wiki if I am just 
missing that guidance.
    > 
    > Brian




_______________________________________________
onap-discuss mailing list
onap-discuss@lists.onap.org
https://lists.onap.org/mailman/listinfo/onap-discuss
