One drawback of deleteAll and our Helm charts in their current incarnation is 
that they delete everything in the namespace, including services (pod and 
cluster IPs will change when they come back), database processes, etc. (though 
not the persisted data).

If you are impatient like me, you can target just the deployment you want to 
bounce by exporting its currently running YAML and then following something 
similar to the link Josef sent.

For example, this is what I would use to bounce just the SO JBoss container:
kubectl -n onap-mso get deployment mso -o=yaml > /tmp/mso.app.yaml
kubectl -n onap-mso delete -f /tmp/mso.app.yaml
kubectl -n onap-mso create -f /tmp/mso.app.yaml
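The three commands above can be wrapped in a small helper so the same sequence works for any deployment. This is just a sketch; the onap-mso namespace and mso deployment are the example values from above, so substitute your own.

```shell
#!/bin/sh
# Sketch: bounce a single deployment by exporting its running YAML,
# deleting it, and recreating it from the saved file.
# Namespace/deployment (onap-mso/mso) are the example values above.
bounce_deployment() {
  ns="$1"
  dep="$2"
  tmp="/tmp/${dep}.app.yaml"
  kubectl -n "$ns" get deployment "$dep" -o=yaml > "$tmp" || return 1
  kubectl -n "$ns" delete -f "$tmp"
  kubectl -n "$ns" create -f "$tmp"
}

# Usage: bounce_deployment onap-mso mso
```

Saving the YAML first matters: once the deployment is deleted, the live object is gone, so the file is what you recreate from.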


Mandeep Khinda
Software Development
Open Network Division
 
+1.613.595.5132 (office)
 

 

On 2018-02-07, 10:00 AM, "onap-discuss-boun...@lists.onap.org on behalf of 
FREEMAN, BRIAN D" <onap-discuss-boun...@lists.onap.org on behalf of 
bf1...@att.com> wrote:

    OK
    
    I assume deleteAll.sh does not remove the dockernfs mount, so persistent 
data should not be lost; just any "fixes" to the container config.
    
    I can work with that.
    
    Brian
    
    
    -----Original Message-----
    From: Alexis de Talhouët [mailto:adetalhoue...@gmail.com] 
    Sent: Wednesday, February 07, 2018 9:56 AM
    To: FREEMAN, BRIAN D <bf1...@att.com>
    Cc: onap-discuss@lists.onap.org; Mike Elliott <mike.elli...@amdocs.com>
    Subject: Re: [onap-discuss] [OOM] restart an entire POD ?
    
    Hi Brian,
    
    Those issues are tracked in JIRA already. Adding Mike, who is looking at it 
(I think).
    
    About your question: you cannot do this through the K8s UI; at least, not 
that I’m aware of.
    But using our scripts, you can delete and create a specific app.
    
    For instance:
    ./oom/kubernetes/oneclick/deleteAll.sh -n onap -a aai <— will delete the 
whole AAI namespace (deployment and services)
    ./oom/kubernetes/oneclick/createAll.sh -n onap -a aai <— will create the 
whole AAI namespace (deployment and services)
    
    I’m not sure this is what you’re after, but that’s how I do it when I need 
to bounce a whole application (e.g. all the containers of an app).
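    The delete/create pair above could likewise be wrapped in a small helper. A sketch, assuming the oom repo is checked out in the current working directory:

```shell
#!/bin/sh
# Sketch: bounce a whole application (e.g. aai) by deleting and then
# recreating its deployments and services via the oneclick scripts.
# Assumes ./oom is a checkout of the OOM repo in the current directory.
bounce_app() {
  ns="$1"
  app="$2"
  ./oom/kubernetes/oneclick/deleteAll.sh -n "$ns" -a "$app"
  ./oom/kubernetes/oneclick/createAll.sh -n "$ns" -a "$app"
}

# Usage: bounce_app onap aai
```

    Note that this recreates the services too, so cluster IPs for that app will change, as mentioned at the top of the thread.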
    
    Alexis
    
    > On Feb 7, 2018, at 9:34 AM, FREEMAN, BRIAN D <bf1...@att.com> wrote:
    > 
    > Michael, Alexi,
    > 
    > I'm having race conditions when I use OOM in Azure where the health check 
passes but distribution fails (MSO and AAI never get notified).
    > 
    > I restarted the SO front end POD and SO successfully picked up a model 
distribution.
    > 
    > I tried to restart just the AAI Model Loader, but that didn't seem to 
work, so I need to restart all of AAI.
    > 
    > I suspect that SO and AAI came up before DMaaP was up, but I can't 
confirm that.
    > 
    > Is there an easy/safe way to restart an entire domain through the K8s 
portal?
    > 
    > Feel free to point me at the right documentation on the wiki if I am just 
missing that guidance.
    > 
    > Brian
    > 
    > _______________________________________________
    > onap-discuss mailing list
    > onap-discuss@lists.onap.org
    > 
> https://lists.onap.org/mailman/listinfo/onap-discuss
    

