Re: [onap-discuss] [**EXTERNAL**] Service distribution error on latest ONAP/OOM

2018-01-31 Thread Ramanarayanan, Karthick
Yes.

I am running in k8s cluster.

I haven't moved to the latest.

I have got the closed loop demo (without dcae of course!) working fine on Dec 
21st setup now.

Regards,

-Karthick


From: Alexis de Talhouët <adetalhoue...@gmail.com>
Sent: Wednesday, January 31, 2018 5:18:42 AM
To: Ramanarayanan, Karthick
Cc: BRIAN D FREEMAN; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] [**EXTERNAL**] Service distribution error on latest 
ONAP/OOM

Hello Karthick, Brian

I think, thanks to Marco, we have identified one piece of the problem.
I guess you guys are running a K8S cluster. The way UEB is configured in SDC is 
using the K8S hosts' IPs, so the DCAE service (when deployed) can reach it when 
retrieving the config using the following request (replace K8S_HOST with one of 
your host IPs):

curl -X GET \
  http://K8S_HOST:30205/sdc/v1/distributionUebCluster \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'X-ECOMP-InstanceID: mso' \
  -H 'authorization: Basic dmlkOktwOGJKNFNYc3pNMFdYbGhhazNlSGxjc2UyZ0F3ODR2YW9HR21KdlV5MlU='

I guess, if you try this request on the setup where it was failing before, 
you'll get a list where the first elements look OK but the second one is wrong; 
see https://jira.onap.org/browse/OOM-638 for more details.
This has now been fixed.

That said, it seems this is not it, and there is still something breaking SDC 
UEB registration.

Can you confirm you’re running in a K8S cluster?  I’m looking into this.

Alexis

On Jan 26, 2018, at 1:22 PM, Ramanarayanan, Karthick <krama...@ciena.com> wrote:

The sdc-backend was always a suspect in my setup (56 core) and I invariably 
used to restart the backend pods to get the backend health checks to succeed:
curl http://127.0.0.1:30205/sdc2/rest/healthCheck

This returns "Service unavailable" when backend doesn't come up. If you restart 
the cassandra/es/kibana pods and then restart the backend, it would come up.
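A minimal sketch of that restart sequence, assuming the pod labels used by OOM in my
setup (sdc-cs, sdc-es, sdc-kb, sdc-be) and the onap-sdc namespace; names may differ in yours:

# restart cassandra, elasticsearch and kibana, wait for them to come back, then restart the backend
kubectl -n onap-sdc delete pod -l app=sdc-cs
kubectl -n onap-sdc delete pod -l app=sdc-es
kubectl -n onap-sdc delete pod -l app=sdc-kb
kubectl -n onap-sdc get pods -w          # wait until the three pods are Running again
kubectl -n onap-sdc delete pod -l app=sdc-be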

On my single-node k8s host (k8s directly on the host, as in my earlier runs),
I see the health check component DE (distribution engine) failing for the backend.

Everything else is up.

curl http://127.0.0.1:30205/sdc2/rest/healthCheck


 {
  "healthCheckComponent": "DE",
  "healthCheckStatus": "DOWN",
  "description": "U-EB cluster is not available"
},


This probably implies that in my setup the UebServers list fetched by the backend 
catalog code (from the distributionEngine configuration, before running the 
DistributionHealthCheck) is not correct when running without a DCAE VM or with 
DCAE disabled.

This is probably the reason why service distribution fails with a policy 
exception.
It's perhaps not able to find the UEB server list when DCAE is disabled.
Alexis would know best.

Regards,
-Karthick

From: FREEMAN, BRIAN D <bf1...@att.com>
Sent: Friday, January 26, 2018 9:52:24 AM
To: Alexis de Talhouët; Ramanarayanan, Karthick
Cc: onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] [**EXTERNAL**] Service distribution error on latest 
ONAP/OOM

Alexis,



I can't get the OOM install to work today (it was working yesterday) - it seems to fail 
on sdc - it doesn't pass healthcheck, due to sdc-be as best I can tell.



I use cd.sh; should I use the 4-step process below instead?



Brian





-Original Message-

From: onap-discuss-boun...@lists.onap.org 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of Alexis de Talhouët

Sent: Friday, January 26, 2018 8:50 AM

To: Ramanarayanan, Karthick <krama...@ciena.com>

Cc: onap-discuss@lists.onap.org

Subject: Re: [onap-discuss] [**EXTERNAL**] Service distribution error on latest 
ONAP/OOM



Karthick,



I’ve just re-tested on latest Amsterdam, and distribute does work fine.



I don’t know if you have redeployed the whole ONAP or not, but understand that 
the issue you had with distribution not working was an issue

impacting both aai and sdc.

The reason is, sdc is configured with the ueb cluster ip address (dmaap, the 
message bus basically), and the way ueb is configured in sdc is using

external access to dmaap, using the k8s node ip instead of the internal 
networking of k8s (e.g. dmaap.onap-message-router).

This change was done recently to accommodate DCAEGEN2 service-change-handler 
micro-service that has to connect to dmaap.

sdc has an api so one can retrieve the ueb cluster ips, 
/sdc/v1/distributionUebCluster, and all the consumers of sdc distribute are 
using the sdc-distribution-client application,

provided by sdc, th

Re: [onap-discuss] Demo update-vfw policy script when running without closed loop

2018-01-29 Thread Ramanarayanan, Karthick
Yes, they were called for sure, since vpacketinit was running the run fw_demo script, 
doing a curl PUT on the restconf server on port 8183, which wasn't instantiated 
because the honeycomb start had failed, as mentioned earlier.

But there are some issues with that.

That's what is being checked.



From: FREEMAN, BRIAN D <bf1...@att.com>
Sent: Monday, January 29, 2018 4:25:09 PM
To: Ramanarayanan, Karthick; PLATANIA, MARCO; onap-discuss@lists.onap.org
Subject: [**EXTERNAL**] Re: [onap-discuss] Demo update-vfw policy script when 
running without closed loop

ONAP just instantiates; check that the vPNG init and install scripts called 
from the heat init got called and can access gerrit and nexus.



Sent via the Samsung Galaxy S8, an AT&T 4G LTE smartphone


 Original message 
From: "Ramanarayanan, Karthick" <krama...@ciena.com>
Date: 1/29/18 6:22 PM (GMT-06:00)
To: "FREEMAN, BRIAN D" <bf1...@att.com>, "PLATANIA, MARCO" 
<plata...@research.att.com>, onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] Demo update-vfw policy script when running without 
closed loop


Thanks Brian.

ssh access from appc pod to the vPG vm was available.

But I don't see any netconf server running on vpg vm on port 2831 (or any port 
for that matter).

The vpg provisioning did happen as part of vnf create as the vpacketgen_init 
script is running.

But no honeycomb start or netconf server running on port 2831.
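A quick way to confirm that from inside the vPG VM, as a sketch (port 2831 is taken
from my netconf mount config, and the honeycomb log path is an assumption; adjust to
your image):

sudo netstat -tlnp | grep 2831                     # is anything listening on the netconf port?
ps aux | grep -i honeycomb                         # is the honeycomb process running at all?
sudo tail -n 50 /var/log/honeycomb/honeycomb.log   # assumed log location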


Doing a manual Honeycomb start ultimately fails with "unable to open vpp 
management connection".

Will check.

Thanks for your help.

Something wrong with the vpp configuration on the vnf that was provisioned by 
onap!





From: FREEMAN, BRIAN D <bf1...@att.com>
Sent: Monday, January 29, 2018 3:51:15 PM
To: Ramanarayanan, Karthick; PLATANIA, MARCO; onap-discuss@lists.onap.org
Subject: [**EXTERNAL**] RE: [onap-discuss] Demo update-vfw policy script when 
running without closed loop


The log looked like the karaf log from ODL going out to the traffic generator. 
vPNG.





You should be able to ssh from the ODL pod to the OAM IP of the vPNG VM on port 
2831 (not sure if you are using 2831 or 1830 – the karaf log is saying APPC is 
trying to contact it on port 2831).



Try a ping first from the APPC POD to the vPNG and then ssh on the port you 
configured in the netconf mount on appc.
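A minimal sketch of that check, run from inside the APPC pod; VPG_OAM_IP is a
placeholder for the vPNG OAM address from your Heat stack, and 2831 is the port
used in the netconf mount:

ping -c 3 VPG_OAM_IP                 # basic reachability over the OAM network
ssh -p 2831 admin@VPG_OAM_IP         # credentials are whatever the netconf mount payload uses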



Brian





From: Ramanarayanan, Karthick [mailto:krama...@ciena.com]
Sent: Monday, January 29, 2018 6:21 PM
To: FREEMAN, BRIAN D <bf1...@att.com>; PLATANIA, MARCO 
<plata...@research.att.com>; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] Demo update-vfw policy script when running without 
closed loop



Hi Brian,

 I am trying to connect from inside appc pod from the k8s host. (k8s in my 
setup is directly on a host)

 That log was obtained from inside appc pod-> ssh to ODL: log:tail



Regards,

-Karthick



From: FREEMAN, BRIAN D <bf1...@att.com>
Sent: Monday, January 29, 2018 3:15:50 PM
To: Ramanarayanan, Karthick; PLATANIA, MARCO; onap-discuss@lists.onap.org
Subject: [**EXTERNAL**] RE: [onap-discuss] Demo update-vfw policy script when 
running without closed loop



Try via ssh from APPC. It's probably a connectivity issue over the OAM network.



Brian





From: onap-discuss-boun...@lists.onap.org 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of Ramanarayanan, Karthick
Sent: Monday, January 29, 2018 6:10 PM
To: PLATANIA, MARCO <plata...@research.att.com>; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] Demo update-vfw policy script when running without 
closed loop



I have tried admin/admin as well for netconf username/password.

Doesn't work.

It seems it's having issues connecting to the netconf server on port 1830.
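One way to narrow that down, as a sketch (the ports and credentials here are
assumptions: 1830 is the internal ODL netconf endpoint seen in the karaf log,
2831 is what the mount points at, and VPG_OAM_IP is a placeholder):

# from inside the APPC pod: check the local ODL netconf endpoint, then the device port the mount uses
ssh admin@127.0.0.1 -p 1830 -s netconf    # should answer with a netconf <hello> if ODL netconf is up
ssh admin@VPG_OAM_IP -p 2831 -s netconf   # checks the vPG side the mount is trying to reach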





From: Ramanarayanan, Karthick
Sent: Monday, January 29, 2018 3:07:28 PM
To: PLATANIA, MARCO (MARCO); onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] Demo update-vfw policy script when running without 
closed loop



This is the ODL karaf log from inside appc pod :

ssh -p 8101 karaf@localhost

log:tail



pertaining to the appc mount put request for your perusal.



2018-01-29 23:01:46,170 | INFO  | on-dispatcher-52 | AbstractNetconfTopology
  | 354 - netconf-topology-config - 1.2.1.Carbon | Connecting 
RemoteDevice{Uri [_value=826d1073-d4cc-4064-bb29-d815701b0d6a]} , with config 
Node{getNodeId=Uri [_value=826d1073-d4cc-4064-bb29-d815701b0d6a], 
augmentations={interface 
org.opendaylight.yang.gen.v1.urn.opendaylight.netconf.node.topology.rev150114.NetconfNode=NetconfNode{getActorResponseWaitTime=5,
 getBetweenAttemptsTim

Re: [onap-discuss] Demo update-vfw policy script when running without closed loop

2018-01-29 Thread Ramanarayanan, Karthick
Hi Brian,

 I am trying to connect from inside appc pod from the k8s host. (k8s in my 
setup is directly on a host)

 That log was obtained from inside appc pod-> ssh to ODL: log:tail


Regards,

-Karthick


From: FREEMAN, BRIAN D <bf1...@att.com>
Sent: Monday, January 29, 2018 3:15:50 PM
To: Ramanarayanan, Karthick; PLATANIA, MARCO; onap-discuss@lists.onap.org
Subject: [**EXTERNAL**] RE: [onap-discuss] Demo update-vfw policy script when 
running without closed loop


Try via ssh from APPC. It's probably a connectivity issue over the OAM network.



Brian





From: onap-discuss-boun...@lists.onap.org 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of Ramanarayanan, 
Karthick
Sent: Monday, January 29, 2018 6:10 PM
To: PLATANIA, MARCO <plata...@research.att.com>; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] Demo update-vfw policy script when running without 
closed loop



I have tried admin/admin as well for netconf username/password.

Doesn't work.

It seems it's having issues connecting to the netconf server on port 1830.



____

From: Ramanarayanan, Karthick
Sent: Monday, January 29, 2018 3:07:28 PM
To: PLATANIA, MARCO (MARCO); onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] Demo update-vfw policy script when running without 
closed loop



This is the ODL karaf log from inside appc pod :

ssh -p 8101 karaf@localhost

log:tail



pertaining to the appc mount put request for your perusal.



2018-01-29 23:01:46,170 | INFO  | on-dispatcher-52 | AbstractNetconfTopology
  | 354 - netconf-topology-config - 1.2.1.Carbon | Connecting 
RemoteDevice{Uri [_value=826d1073-d4cc-4064-bb29-d815701b0d6a]} , with config 
Node{getNodeId=Uri [_value=826d1073-d4cc-4064-bb29-d815701b0d6a], 
augmentations={interface 
org.opendaylight.yang.gen.v1.urn.opendaylight.netconf.node.topology.rev150114.NetconfNode=NetconfNode{getActorResponseWaitTime=5,
 getBetweenAttemptsTimeoutMillis=2000, getConcurrentRpcLimit=0, 
getConnectionTimeoutMillis=2, 
getCredentials=LoginPassword{getPassword=root, getUsername=root, 
augmentations={}}, getDefaultRequestTimeoutMillis=6, getHost=Host 
[_ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=172.23.3.16]]], 
getKeepaliveDelay=120, getMaxConnectionAttempts=0, getPort=PortNumber 
[_value=2831], getSchemaCacheDirectory=schema, getSleepFactor=1.5, 
isReconnectOnChangedSchema=false, isSchemaless=false, isTcpOnly=false}}}

2018-01-29 23:01:46,175 | WARN  | on-dispatcher-52 | AbstractNetconfTopology
  | 354 - netconf-topology-config - 1.2.1.Carbon | Adding keepalive facade, 
for device Uri [_value=826d1073-d4cc-4064-bb29-d815701b0d6a]

2018-01-29 23:01:46,175 | INFO  | on-dispatcher-52 | AbstractNetconfTopology
  | 354 - netconf-topology-config - 1.2.1.Carbon | Concurrent rpc limit is 
smaller than 1, no limit will be enforced for device 
RemoteDevice{826d1073-d4cc-4064-bb29-d815701b0d6a}

2018-01-29 23:01:46,203 | WARN  | a]-nio2-thread-5 | AsyncSshHandler
  | 340 - org.opendaylight.netconf.netty-util - 1.2.1.Carbon | Unable to 
setup SSH connection on channel: [id: 0x9380f506]

java.net.ConnectException: Connection refused

at sun.nio.ch.UnixAsynchronousSocketChannelImpl.checkConnect(Native 
Method)[:1.8.0_151]

at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finishConnect(UnixAsynchronousSocketChannelImpl.java:252)[:1.8.0_151]

at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finish(UnixAsynchronousSocketChannelImpl.java:198)[:1.8.0_151]

at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.onEvent(UnixAsynchronousSocketChannelImpl.java:213)[:1.8.0_151]

at sun.nio.ch.EPollPort$EventHandlerTask.run(EPollPort.java:293)[:1.8.0_151]

at java.lang.Thread.run(Thread.java:748)[:1.8.0_151]

2018-01-29 23:01:48,220 | WARN  | a]-nio2-thread-6 | AsyncSshHandler
  | 340 - org.opendaylight.netconf.netty-util - 1.2.1.Carbon | Unable to 
setup SSH connection on channel: [id: 0x19ec55b6]

java.net.ConnectException: Connection refused

at sun.nio.ch.UnixAsynchronousSocketChannelImpl.checkConnect(Native 
Method)[:1.8.0_151]

at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finishConnect(UnixAsynchronousSocketChannelImpl.java:252)[:1.8.0_151]

at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finish(UnixAsynchronousSocketChannelImpl.java:198)[:1.8.0_151]

at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.onEvent(UnixAsynchronousSocketChannelImpl.java:213)[:1.8.0_151]

at sun.nio.ch.EPollPort$EventHandlerTask.run(EPollPort.java:293)[:1.8.0_151]

at java.lang.Thread.run(Thread.java:748)[:1.8.0_151]

2018-01-29 23:01:51,235 | WARN  | a]-nio2-thread-7 | AsyncSshHandler
  | 340 - org.opendaylight.netconf.netty-util - 1.2.1.Carbon | Unable to 
setup SSH connection on channel: [id: 0x0e1e1323]

java.net.ConnectException: Connection refused

at sun.nio.ch.UnixAsynchronousSocketChannelImpl.checkConnect(Nativ

Re: [onap-discuss] Demo update-vfw policy script when running without closed loop

2018-01-29 Thread Ramanarayanan, Karthick
I have tried admin/admin as well for netconf username/password.

Doesn't work.

It seems it's having issues connecting to the netconf server on port 1830.



From: Ramanarayanan, Karthick
Sent: Monday, January 29, 2018 3:07:28 PM
To: PLATANIA, MARCO (MARCO); onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] Demo update-vfw policy script when running without 
closed loop


This is the ODL karaf log from inside appc pod :

ssh -p 8101 karaf@localhost

log:tail


pertaining to the appc mount put request for your perusal.


2018-01-29 23:01:46,170 | INFO  | on-dispatcher-52 | AbstractNetconfTopology
  | 354 - netconf-topology-config - 1.2.1.Carbon | Connecting 
RemoteDevice{Uri [_value=826d1073-d4cc-4064-bb29-d815701b0d6a]} , with config 
Node{getNodeId=Uri [_value=826d1073-d4cc-4064-bb29-d815701b0d6a], 
augmentations={interface 
org.opendaylight.yang.gen.v1.urn.opendaylight.netconf.node.topology.rev150114.NetconfNode=NetconfNode{getActorResponseWaitTime=5,
 getBetweenAttemptsTimeoutMillis=2000, getConcurrentRpcLimit=0, 
getConnectionTimeoutMillis=2, 
getCredentials=LoginPassword{getPassword=root, getUsername=root, 
augmentations={}}, getDefaultRequestTimeoutMillis=6, getHost=Host 
[_ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=172.23.3.16]]], 
getKeepaliveDelay=120, getMaxConnectionAttempts=0, getPort=PortNumber 
[_value=2831], getSchemaCacheDirectory=schema, getSleepFactor=1.5, 
isReconnectOnChangedSchema=false, isSchemaless=false, isTcpOnly=false}}}
2018-01-29 23:01:46,175 | WARN  | on-dispatcher-52 | AbstractNetconfTopology
  | 354 - netconf-topology-config - 1.2.1.Carbon | Adding keepalive facade, 
for device Uri [_value=826d1073-d4cc-4064-bb29-d815701b0d6a]
2018-01-29 23:01:46,175 | INFO  | on-dispatcher-52 | AbstractNetconfTopology
  | 354 - netconf-topology-config - 1.2.1.Carbon | Concurrent rpc limit is 
smaller than 1, no limit will be enforced for device 
RemoteDevice{826d1073-d4cc-4064-bb29-d815701b0d6a}
2018-01-29 23:01:46,203 | WARN  | a]-nio2-thread-5 | AsyncSshHandler
  | 340 - org.opendaylight.netconf.netty-util - 1.2.1.Carbon | Unable to 
setup SSH connection on channel: [id: 0x9380f506]
java.net.ConnectException: Connection refused
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.checkConnect(Native 
Method)[:1.8.0_151]
at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finishConnect(UnixAsynchronousSocketChannelImpl.java:252)[:1.8.0_151]
at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finish(UnixAsynchronousSocketChannelImpl.java:198)[:1.8.0_151]
at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.onEvent(UnixAsynchronousSocketChannelImpl.java:213)[:1.8.0_151]
at sun.nio.ch.EPollPort$EventHandlerTask.run(EPollPort.java:293)[:1.8.0_151]
at java.lang.Thread.run(Thread.java:748)[:1.8.0_151]
2018-01-29 23:01:48,220 | WARN  | a]-nio2-thread-6 | AsyncSshHandler
  | 340 - org.opendaylight.netconf.netty-util - 1.2.1.Carbon | Unable to 
setup SSH connection on channel: [id: 0x19ec55b6]
java.net.ConnectException: Connection refused
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.checkConnect(Native 
Method)[:1.8.0_151]
at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finishConnect(UnixAsynchronousSocketChannelImpl.java:252)[:1.8.0_151]
at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finish(UnixAsynchronousSocketChannelImpl.java:198)[:1.8.0_151]
at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.onEvent(UnixAsynchronousSocketChannelImpl.java:213)[:1.8.0_151]
at sun.nio.ch.EPollPort$EventHandlerTask.run(EPollPort.java:293)[:1.8.0_151]
at java.lang.Thread.run(Thread.java:748)[:1.8.0_151]
2018-01-29 23:01:51,235 | WARN  | a]-nio2-thread-7 | AsyncSshHandler
  | 340 - org.opendaylight.netconf.netty-util - 1.2.1.Carbon | Unable to 
setup SSH connection on channel: [id: 0x0e1e1323]
java.net.ConnectException: Connection refused
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.checkConnect(Native 
Method)[:1.8.0_151]
at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finishConnect(UnixAsynchronousSocketChannelImpl.java:252)[:1.8.0_151]
at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finish(UnixAsynchronousSocketChannelImpl.java:198)[:1.8.0_151]
at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.onEvent(UnixAsynchronousSocketChannelImpl.java:213)[:1.8.0_151]
at sun.nio.ch.EPollPort$EventHandlerTask.run(EPollPort.java:293)[:1.8.0_151]
at java.lang.Thread.run(Thread.java:748)[:1.8.0_151]
2018-01-29 23:01:55,751 | WARN  | a]-nio2-thread-8 | AsyncSshHandler
  | 340 - org.opendaylight.netconf.netty-util - 1.2.1.Carbon | Unable to 
setup SSH connection on channel: [id: 0xd6b84044]
java.net.ConnectException: Connection refused
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.checkConnect(Native 
Method)[:1.8.0_151]
at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finishConnect(UnixAsynchronousSocketChannelImpl.java:252)[:1.8.0_151]
at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finish

Re: [onap-discuss] Demo update-vfw policy script when running without closed loop

2018-01-29 Thread Ramanarayanan, Karthick
 - org.opendaylight.netconf.netty-util - 1.2.1.Carbon | Unable to 
setup SSH connection on channel: [id: 0x6c1fe552]
java.net.ConnectException: Connection refused
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.checkConnect(Native 
Method)[:1.8.0_151]
at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finishConnect(UnixAsynchronousSocketChannelImpl.java:252)[:1.8.0_151]
at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finish(UnixAsynchronousSocketChannelImpl.java:198)[:1.8.0_151]
at 
sun.nio.ch.UnixAsynchronousSocketChannelImpl.onEvent(UnixAsynchronousSocketChannelImpl.java:213)[:1.8.0_151]
at sun.nio.ch.EPollPort$EventHandlerTask.run(EPollPort.java:293)[:1.8.0_151]
at java.lang.Thread.run(Thread.java:748)[:1.8.0_151]
2018-01-29 23:02:04,571 | INFO  | a]-nio2-thread-2 | ClientSessionImpl  
  | 30 - org.apache.sshd.core - 0.14.0 | Client session created
2018-01-29 23:02:04,571 | INFO  | o-group-thread-8 | ServerSession  
  | 30 - org.apache.sshd.core - 0.14.0 | Server session created from 
/127.0.0.1:34746
2018-01-29 23:02:04,574 | INFO  | a]-nio2-thread-2 | ClientSessionImpl  
  | 30 - org.apache.sshd.core - 0.14.0 | Start flagging packets as pending 
until key exchange is done
2018-01-29 23:02:04,574 | INFO  | a]-nio2-thread-2 | ClientSessionImpl  
  | 30 - org.apache.sshd.core - 0.14.0 | Server version string: 
SSH-2.0-SSHD-CORE-0.14.0
2018-01-29 23:02:04,855 | INFO  | Appc-Listener-1  | EventHandlerImpl   
  | 360 - appc-common - 1.2.0 | Read 0 messages from APPC-LCM-READ as 
APPC-EVENT-LISTENER-TEST/390.
2018-01-29 23:02:04,856 | INFO  | Appc-Listener-1  | EventHandlerImpl   
  | 360 - appc-common - 1.2.0 | Getting up to 10 incoming events
2018-01-29 23:02:04,856 | INFO  | Appc-Listener-1  | HttpDmaapConsumerImpl  
  | 365 - appc-dmaap-adapter-bundle - 1.2.0 | GET 
http://dmaap.onap-message-router:3904/events/APPC-LCM-READ/APPC-EVENT-LISTENER-TEST/390?timeout=6=10
2018-01-29 23:02:04,866 | ERROR | Appc-Listener-1  | HttpDmaapConsumerImpl  
  | 365 - appc-dmaap-adapter-bundle - 1.2.0 | Did not get 200 from DMaaP. 
Got 404 - 
{"mrstatus":3001,"helpURL":"https://wiki.web.att.com/display/DMAAP/DMaaP+Home","message":"No
 such topic exists.-[APPC-LCM-READ]","status":404}
2018-01-29 23:02:04,867 | INFO  | Appc-Listener-1  | HttpDmaapConsumerImpl  
  | 365 - appc-dmaap-adapter-bundle - 1.2.0 | Sleeping for 60s after failed 
request
2018-01-29 23:02:05,494 | WARN  | a]-nio2-thread-4 | AcceptAllServerKeyVerifier 
  | 30 - org.apache.sshd.core - 0.14.0 | Server at /127.0.0.1:1830 
presented unverified RSA key: a0:99:2c:0f:ef:e3:74:2f:e9:b0:b7:17:cc:4b:a5:65
2018-01-29 23:02:05,495 | INFO  | a]-nio2-thread-4 | ClientSessionImpl  
  | 30 - org.apache.sshd.core - 0.14.0 | Dequeing pending packets
2018-01-29 23:02:05,496 | INFO  | a]-nio2-thread-6 | ClientUserAuthServiceNew   
  | 30 - org.apache.sshd.core - 0.14.0 | Received SSH_MSG_USERAUTH_FAILURE
2018-01-29 23:02:05,497 | INFO  | a]-nio2-thread-7 | 
UserAuthKeyboardInteractive  | 30 - org.apache.sshd.core - 0.14.0 | 
Received Password authentication  en-US
2018-01-29 23:02:05,510 | INFO  | a]-nio2-thread-8 | ClientUserAuthServiceNew   
  | 30 - org.apache.sshd.core - 0.14.0 | Received SSH_MSG_USERAUTH_FAILURE
2018-01-29 23:02:05,510 | INFO  | a]-nio2-thread-1 | 
UserAuthKeyboardInteractive  | 30 - org.apache.sshd.core - 0.14.0 | 
Received Password authentication  en-US
2018-01-29 23:02:05,511 | INFO  | a]-nio2-thread-2 | 
UserAuthKeyboardInteractive  | 30 - org.apache.sshd.core - 0.14.0 | 
Received Password authentication  en-US
2018-01-29 23:02:05,511 | INFO  | a]-nio2-thread-3 | 
UserAuthKeyboardInteractive  | 30 - org.apache.sshd.core - 0.14.0 | 
Received Password authentication  en-US
2018-01-29 23:02:05,523 | INFO  | a]-nio2-thread-4 | ClientUserAuthServiceNew   
  | 30 - org.apache.sshd.core - 0.14.0 | Received SSH_MSG_USERAUTH_FAILURE


____
From: Ramanarayanan, Karthick
Sent: Monday, January 29, 2018 2:24:44 PM
To: PLATANIA, MARCO (MARCO); onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] Demo update-vfw policy script when running without 
closed loop


Hi Marco,

  The policy push works now with your workaround to add (?h=amsterdam) in the 
push policy script with a complete ONAP restart.

  Policies are pushed successfully for the packetgen vnf.

  Then after an appc put to network topology to configure packetgen interface 
which succeeds,

  I still don't see the appc mount points show up in appc/opendaylight 
interface.

  The packetgen vnf is up and running.

  OOM policy is 1.1.1.

  No dcae present as mentioned earlier.

  Everything else has gone fine except the appc mount.

  Appc pod logs don't reveal anything.

  What am I missing?


Regards,

-Karthick


From: PLATANIA, MARCO (MARCO) <plata...

Re: [onap-discuss] Demo update-vfw policy script when running without closed loop

2018-01-29 Thread Ramanarayanan, Karthick
Hi Marco,

  The policy push works now with your workaround to add (?h=amsterdam) in the 
push policy script with a complete ONAP restart.

  Policies are pushed successfully for the packetgen vnf.

  Then after an appc put to network topology to configure packetgen interface 
which succeeds,

  I still don't see the appc mount points show up in appc/opendaylight 
interface.

  The packetgen vnf is up and running.

  OOM policy is 1.1.1.

  No dcae present as mentioned earlier.

  Everything else has gone fine except the appc mount.

  Appc pod logs don't reveal anything.

  What am I missing?


Regards,

-Karthick


From: PLATANIA, MARCO (MARCO) <plata...@research.att.com>
Sent: Thursday, January 25, 2018 11:08:24 AM
To: Ramanarayanan, Karthick; onap-discuss@lists.onap.org
Subject: [**EXTERNAL**] Re: [onap-discuss] Demo update-vfw policy script when 
running without closed loop


Karthick,



Please look for BRMS_DEPENDENCY_VERSION at the end of 
${OOM_HOME}/kubernetes/config/docker/init/src/config/policy/opt/policy/config/pe/brmsgw.conf.



That parameter should be the same as the Policy container version number. For 
Amsterdam, it has to be either 1.1.1 or 1.1.3, depending on the Policy version 
that you are using (Amsterdam v1.1.1 or Amsterdam Maintenance v1.1.3).
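For example, with the 1.1.1 Policy images the end of brmsgw.conf should carry a line
like this (a hypothetical snippet; the surrounding content varies by release):

BRMS_DEPENDENCY_VERSION=1.1.1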



Also, 
${OOM_HOME}/kubernetes/config/docker/init/src/config/policy/opt/policy/config/pe/push-policies.sh,
 line 11 should be:



wget -O cl-amsterdam-template.drl 
https://git.onap.org/policy/drools-applications/plain/controlloop/templates/archetype-cl-amsterdam/src/main/resources/archetype-resources/src/main/resources/__closedLoopControlName__.drl?h=amsterdam



instead of wget -O cl-amsterdam-template.drl 
https://git.onap.org/policy/drools-applications/plain/controlloop/templates/archetype-cl-amsterdam/src/main/resources/archetype-resources/src/main/resources/__closedLoopControlName__.drl



(note ?h=amsterdam at the end of the correct call).



You can make these changes in your OOM local repo, as described above, or 
directly in the ONAP configuration folder in your NFS share (or local disk if 
you have a single-host K8S cluster), in 
/dockerdata-nfs/onap/policy/opt/policy/config/pe/push-policies.sh (and the same 
path for brmsgw.conf). The former approach requires rebuilding ONAP, while the 
latter requires rebuilding only Policy.
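A minimal sketch of the second approach, assuming a single-host K8S cluster, the
onap-policy namespace and the pod labels used by OOM in my setup; adjust paths and
names to yours:

# edit the shared config in place
vi /dockerdata-nfs/onap/policy/opt/policy/config/pe/brmsgw.conf        # set BRMS_DEPENDENCY_VERSION
vi /dockerdata-nfs/onap/policy/opt/policy/config/pe/push-policies.sh   # add ?h=amsterdam to the wget

# then recycle only the Policy pods so they pick up the new config
kubectl -n onap-policy delete pod -l app=brmsgw
kubectl -n onap-policy delete pod -l app=drools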



Marco







From: <onap-discuss-boun...@lists.onap.org> on behalf of "Ramanarayanan, 
Karthick" <krama...@ciena.com>
Date: Thursday, January 25, 2018 at 1:32 PM
To: "onap-discuss@lists.onap.org" <onap-discuss@lists.onap.org>
Subject: [onap-discuss] Demo update-vfw policy script when running without 
closed loop



Hi,

 In my kubernetes setup minus dcae (just vFW without closed loop),

 I am trying to mount the appc packetgen interface but I am unable to see the 
mounts in appc mounts list.

 The policy that was pushed used the kubernetes update-vfw-op-policy.sh script 
which seems to be applicable for closed loop.



 Though the policy script runs and applies the policy and restarts the policy 
pods, the get on controlloop.Params fails at the end.



 curl -v --silent --user @1b3rt:31nst31n -X GET 
http://${K8S_HOST}:${POLICY_DROOLS_PORT}/policy/pdp/engine/controllers/amsterdam/drools/facts/closedloop-amsterdam/org.onap.policy.controlloop.Params
  | python -m json.tool



{

"error": "amsterdam:closedloop-amsterdam:org.onap.policy.contro

Re: [onap-discuss] [**EXTERNAL**] Service distribution error on latest ONAP/OOM

2018-01-26 Thread Ramanarayanan, Karthick
The sdc-backend was always a suspect in my setup (56 core) and I invariably 
used to restart the backend pods to get the backend health checks to succeed:

curl http://127.0.0.1:30205/sdc2/rest/healthCheck


This returns "Service unavailable" when backend doesn't come up. If you restart 
the cassandra/es/kibana pods and then restart the backend, it would come up.


On my single-node k8s host (k8s directly on the host, as in my earlier runs),

I see the health check component DE (distribution engine) failing for the backend.


Everything else is up.


curl http://127.0.0.1:30205/sdc2/rest/healthCheck


 {
  "healthCheckComponent": "DE",
  "healthCheckStatus": "DOWN",
  "description": "U-EB cluster is not available"
},


This probably implies that in my setup the UebServers list fetched by the backend 
catalog code (from the distributionEngine configuration, before running the 
DistributionHealthCheck) is not correct when running without a DCAE VM or with 
DCAE disabled.


This is probably the reason why service distribution fails with a policy 
exception.

It's perhaps not able to find the UEB server list when DCAE is disabled.

Alexis would know best.


Regards,

-Karthick


From: FREEMAN, BRIAN D <bf1...@att.com>
Sent: Friday, January 26, 2018 9:52:24 AM
To: Alexis de Talhouët; Ramanarayanan, Karthick
Cc: onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] [**EXTERNAL**] Service distribution error on latest 
ONAP/OOM

Alexis,



I can't get the OOM install to work today (it was working yesterday) - it seems to fail 
on sdc - it doesn't pass healthcheck, due to sdc-be as best I can tell.



I use cd.sh; should I use the 4-step process below instead?



Brian





-Original Message-

From: onap-discuss-boun...@lists.onap.org 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of Alexis de Talhouët

Sent: Friday, January 26, 2018 8:50 AM

To: Ramanarayanan, Karthick <krama...@ciena.com>

Cc: onap-discuss@lists.onap.org

Subject: Re: [onap-discuss] [**EXTERNAL**] Service distribution error on latest 
ONAP/OOM



Karthick,



I’ve just re-tested on latest Amsterdam, and distribute does work fine.



I don’t know if you have redeployed the whole ONAP or not, but understand that 
the issue you had with distribution not working was an issue

impacting both aai and sdc.

The reason is, sdc is configured with the ueb cluster ip address (dmaap, the 
message bus basically), and the way ueb is configured in sdc is using

external access to dmaap, using the k8s node ip instead of the internal 
networking of k8s (e.g. dmaap.onap-message-router).

This change was done recently to accommodate DCAEGEN2 service-change-handler 
micro-service that has to connect to dmaap.

sdc has an api so one can retrieve the ueb cluster ips, 
/sdc/v1/distributionUebCluster, and all the consumers of sdc distribute are 
using the sdc-distribution-client application,

provided by sdc, that retrieves the ueb cluster ips using the api mentioned 
before. Hence when the DCAE micro service was retrieving the ips of the ueb 
cluster, and that one

was configured using k8s networking (dmaap.onap-message-router), the micro 
service was unable to resolve this; that’s why I changed it to the k8s node ip, 
that has to be resolvable

by the DCAE’s VMs.



Hope that clarifies a little bit what happened, and explains why I recommend 
you re-deploy the whole onap by doing the following:



- In oom/kubernetes/oneclick: ./deleteAll.sh -n onap

- In the k8s nodes, rm -rf /dockerdata-nfs

- In oom/kubernetes/config: ./createConfig.sh -n onap

- In oom/kubernetes/oneclick: ./createAll.sh -n onap



This should take no longer than 15 min as you already have the docker images in 
your k8s hosts.



Alexis



> On Jan 25, 2018, at 8:17 PM, Ramanarayanan, Karthick <krama...@ciena.com> 
> wrote:

>

> Hi Alexis,

>  I am still getting the Policy Exception error POL5000 with dcae disabled, 
> (dcaegen2 app not running as mentioned earlier).

>  I am on the latest OOM for amsterdam (policy images are 1.1.3 as verified).

>  Service distribution immediately fails.

>  policy pod logs don't indicate anything.

> They do resolve dmaap.onap-message-router fine and connected to dmaap on port 
> 3904.

>

> Regards,

> -Karthick

> From: Ramanarayanan, Karthick

> Sent: Thursday, January 25, 2018 10:33:44 AM

> To: Alexis de Talhouët

> Cc: onap-discuss@lists.onap.org; Bainbridge, David

> Subject: Re: [**EXTERNAL**] [onap-discuss] Service distribution error on 
> latest ONAP/OOM

>

> Thanks Alexis.

> Fix is looking good but I haven't moved up yet.

> Will do later.

> Regards,

> -Karthick

> From: Alexis de Talhouët <adetalhoue...@gmail.com>

> Sent: Tuesday, January 23, 2018 8:55:06 AM

> To: Ramanarayanan, Karthick

> Cc: onap

Re: [onap-discuss] [**EXTERNAL**] Service distribution error on latest ONAP/OOM

2018-01-26 Thread Ramanarayanan, Karthick
Hi Alexis,
I had redeployed onap. Complete clean deploy as you had mentioned. Of course.
There are no dcae vms and neither is dcae deploy enabled for my config.
It’s just a single k8s host as mentioned earlier.
Clean deploy. Works if I revert back to dec.21st commit mentioned earlier. 
Fails otherwise. Dcae doesn’t exist for my setup.
Regards,
Karthick


From: Alexis de Talhouët <adetalhoue...@gmail.com>
Sent: Friday, January 26, 2018 5:49:37 AM
To: Ramanarayanan, Karthick
Cc: onap-discuss@lists.onap.org; Bainbridge, David
Subject: Re: [**EXTERNAL**] [onap-discuss] Service distribution error on latest 
ONAP/OOM

Karthick,

I’ve just re-tested on latest Amsterdam, and distribute does work fine.

I don’t know if you have redeployed the whole ONAP or not, but understand that 
the issue you had with distribution not working was an issue
impacting both aai and sdc.
The reason is, sdc is configured with the ueb cluster ip address (dmaap, the 
message bus basically), and the way ueb is configured in sdc is using
external access to dmaap, using the k8s node ip instead of the internal 
networking of k8s (e.g. dmaap.onap-message-router).
This change was done recently to accommodate DCAEGEN2 service-change-handler 
micro-service that has to connect to dmaap.
sdc has an api so one can retrieve the ueb cluster ips, 
/sdc/v1/distributionUebCluster, and all the consumers of sdc distribute are 
using the sdc-distribution-client application,
provided by sdc, that retrieves the ueb cluster ips using the api mentioned 
before. Hence when the DCAE micro service was retrieving the ips of the ueb 
cluster, and that one
was configured using k8s networking (dmaap.onap-message-router), the micro 
service was unable to resolve this; that’s why I changed it to the k8s node ip, 
that has to be resolvable
by the DCAE’s VMs.

Hope that clarifies a little bit what happened, and explains why I recommend 
you re-deploy the whole onap by doing the following:

- In oom/kubernetes/oneclick: ./deleteAll.sh -n onap
- In the k8s nodes, rm -rf /dockerdata-nfs
- In oom/kubernetes/config: ./createConfig.sh -n onap
- In oom/kubernetes/oneclick: ./createAll.sh -n onap

This should take no longer than 15 min as you already have the docker images in 
your k8s hosts.

Alexis

> On Jan 25, 2018, at 8:17 PM, Ramanarayanan, Karthick <krama...@ciena.com> 
> wrote:
>
> Hi Alexis,
>  I am still getting the Policy Exception error POL5000 with dcae disabled, 
> (dcaegen2 app not running as mentioned earlier).
>  I am on the latest OOM for amsterdam (policy images are 1.1.3 as verified).
>  Service distribution immediately fails.
>  policy pod logs don't indicate anything.
> They do resolve dmaap.onap-message-router fine and connected to dmaap on port 
> 3904.
>
> Regards,
> -Karthick
> From: Ramanarayanan, Karthick
> Sent: Thursday, January 25, 2018 10:33:44 AM
> To: Alexis de Talhouët
> Cc: onap-discuss@lists.onap.org; Bainbridge, David
> Subject: Re: [**EXTERNAL**] [onap-discuss] Service distribution error on 
> latest ONAP/OOM
>
> Thanks Alexis.
> Fix is looking good but I haven't moved up yet.
> Will do later.
> Regards,
> -Karthick
> From: Alexis de Talhouët <adetalhoue...@gmail.com>
> Sent: Tuesday, January 23, 2018 8:55:06 AM
> To: Ramanarayanan, Karthick
> Cc: onap-discuss@lists.onap.org; Bainbridge, David
> Subject: Re: [**EXTERNAL**] [onap-discuss] Service distribution error on 
> latest ONAP/OOM
>
> Karthick,
>
> The fix is out: https://gerrit.onap.org/r/#/c/28591/ and has been tested.
> Expect this to be merged in a couple of hours.
>
> Please re-test and confirm it does fix your issue when you have time.
>
> Regards,
> Alexis
>
>> On Jan 22, 2018, at 11:57 AM, Ramanarayanan, Karthick <krama...@ciena.com> 
>> wrote:
>>
>> That's great Alexis.
>> Thanks.
>> (also don't be surprised if backend doesn't come up sometimes with no 
>> indicator in the pod logs.
>>  Just restart cassandra, elastic search and kibana pod before restarting 
>> backend pod and it would load the user profiles in the sdc-be logs :)
>>
>> Regards,
>> -Karthick
>> From: Alexis de Talhouët <adetalhoue...@gmail.com>
>> Sent: Monday, January 22, 2018 5:10:26 AM
>> To: Ramanarayanan, Karthick
>> Cc: onap-discuss@lists.onap.org; Bainbridge, David
>> Subject: Re: [**EXTERNAL**] [onap-discuss] Service distribution error on 
>> latest ONAP/OOM
>>
>> Hi Karthick,
>>
>> Yes, I’m aware of this since y

Re: [onap-discuss] [**EXTERNAL**] Service distribution error on latest ONAP/OOM

2018-01-25 Thread Ramanarayanan, Karthick
Hi Alexis,

 I am still getting the Policy Exception error POL5000 with dcae disabled, 
(dcaegen2 app not running as mentioned earlier).

 I am on the latest OOM for amsterdam (policy images are 1.1.3 as verified).

 Service distribution immediately fails.

 policy pod logs don't indicate anything.

They do resolve dmaap.onap-message-router fine and connected to dmaap on port 
3904.
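As a sanity check from inside the policy pod, a sketch using the DMaaP Message
Router REST API (the service name and port are the ones from my OOM setup):

curl http://dmaap.onap-message-router:3904/topics    # should list the message-router topics if DMaaP is reachable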



Regards,

-Karthick


From: Ramanarayanan, Karthick
Sent: Thursday, January 25, 2018 10:33:44 AM
To: Alexis de Talhouët
Cc: onap-discuss@lists.onap.org; Bainbridge, David
Subject: Re: [**EXTERNAL**] [onap-discuss] Service distribution error on latest 
ONAP/OOM


Thanks Alexis.

Fix is looking good but I haven't moved up yet.

Will do later.

Regards,

-Karthick


From: Alexis de Talhouët <adetalhoue...@gmail.com>
Sent: Tuesday, January 23, 2018 8:55:06 AM
To: Ramanarayanan, Karthick
Cc: onap-discuss@lists.onap.org; Bainbridge, David
Subject: Re: [**EXTERNAL**] [onap-discuss] Service distribution error on latest 
ONAP/OOM

Karthick,

The fix is out: https://gerrit.onap.org/r/#/c/28591/ and has been tested.
Expect this to be merged in a couple of hours.

Please re-test and confirm it does fix your issue when you have time.

Regards,
Alexis

On Jan 22, 2018, at 11:57 AM, Ramanarayanan, Karthick <krama...@ciena.com> wrote:

That's great Alexis.
Thanks.
(also don't be surprised if backend doesn't come up sometimes with no indicator 
in the pod logs.
 Just restart cassandra, elastic search and kibana pod before restarting 
backend pod and it would load the user profiles in the sdc-be logs :)

Regards,
-Karthick

From: Alexis de Talhouët <adetalhoue...@gmail.com>
Sent: Monday, January 22, 2018 5:10:26 AM
To: Ramanarayanan, Karthick
Cc: onap-discuss@lists.onap.org; Bainbridge, David
Subject: Re: [**EXTERNAL**] [onap-discuss] Service distribution error on latest 
ONAP/OOM

Hi Karthick,

Yes, I’m aware of this since you mentioned it last week. I reproduced the issue.
Currently implementing a fix for it. Sorry for the regression introduced.

See https://jira.onap.org/browse/OOM-608 for more details.

Thanks,
Alexis

On Jan 19, 2018, at 4:21 PM, Ramanarayanan, Karthick <krama...@ciena.com> wrote:

Hi Alexis,
 I reverted the oom commit from head to:


git checkout cb02aa241edd97acb6c5ca744de84313f53e8a5a

Author: yuryn <yury.novit...@amdocs.com>
Date:   Thu Dec 21 14:31:21 2017 +0200

Fix firefox tab crashes in VNC

Change-Id: Ie295257d98ddf32693309535e15c6ad9529f10fc
Issue-ID: OOM-531


Everything works with service creation, vnf and vf creates!
Please note that I am running with dcae disabled.
Something is broken with dcae disabled in the latest.
100% reproducible with service distribution step through operator taking a 
policy exception mailed earlier.
Have a nice weekend.

Regards,
-Karthick






From: Ramanarayanan, Karthick
Sent: Friday, January 19, 2018 8:48:23 AM
To: Alexis de Talhouët
Cc: onap-discuss@lists.onap.org<mailto:onap-discuss@lists.onap.org>
Subject: Re: [**EXTERNAL**] Re: [onap-discuss] Service distribution error on 
latest ONAP/OOM

Hi Alexis,
 I did check the policy pod logs before sending the mail.
 I didn't see anything suspicious.
 I initially suspected aai-service dns not getting resolved but you seem to 
have fixed it
 and it was accessible from policy pod.
 Nothing suspicious from any log anywhere.
 I did see that the health check on sdc pods returned all UP except: DE 
component whose health check was down.
 Not sure if it's in any way related. Could be benign.


curl http://127.0.0.1:30206/sdc1/rest/healthCheck
{
 "sdcVersion": "1.1.0",
 "siteMode": "unknown",
 "componentsInfo": [
   {
 "healthCheckComponent": "BE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 &

Re: [onap-discuss] [**EXTERNAL**] Re: Demo update-vfw policy script when running without closed loop

2018-01-25 Thread Ramanarayanan, Karthick
I am running 1.1.1 OOM.


With the /dockerdata-nfs push-policy script change suggested by Marco to pull 
from amsterdam (the ?h=amsterdam change), it didn't work with 1.1.


I still see the GET error on the update-policy run after restarting the policy 
pods, confirming the policy script was updated (BRMS dependency version matches 
1.1.1).


Logging into the pdp pod to check the /var/log/onap/pdp rest logs shows a bunch of 
errors which seem to indicate the policy deployment issues mailed by Hernandez in 
the context of 1.1.2, though I am running 1.1.1.

Interesting.


I will just move to OOM 1.1.3 with a pull before retrying, since the service 
distribution issue, along with this issue pointed out by Marco, has been fixed.


Another thing I wanted to point out is that restarting ONAP k8s pods with 
/dockerdata-nfs data present takes a long time with create-config.


I know it's not required to re-create onap-config if shared data is already 
present, but I think it's better to change the brute-force find approach in the 
config-init scripts to avoid touching database files (huge files when 
/dockerdata-nfs already exists), which take forever to run through the config sed update.


Maybe something like this in config-init.sh for all the finds:


find /config-init/$NAMESPACE/ -type f -exec sed -i -e 
"s/\.onap-/\.$NAMESPACE-/g" {} \;


Could be changed to:


find /config-init/$NAMESPACE/ -path */conf/* -type f -exec sed -i -e 
"s/\.onap-/\.$NAMESPACE-/g" {} \;


so it targets only conf locations.


Anyway, I will update OOM to the latest amsterdam, which I presume moves the images to 
1.1.3, and retry this.



Regards,

-Karthick



From: Alexis de Talhouët <adetalhoue...@gmail.com>
Sent: Thursday, January 25, 2018 12:00:36 PM
To: HERNANDEZ-HERRERO, JORGE
Cc: Ramanarayanan, Karthick; onap-discuss@lists.onap.org
Subject: [**EXTERNAL**] Re: [onap-discuss] Demo update-vfw policy script when 
running without closed loop

Finish testing, it does fix the issue. Thanks,
Alexis

On Jan 25, 2018, at 2:28 PM, Alexis de Talhouët <adetalhoue...@gmail.com> wrote:

Just put up a fix for this: https://gerrit.onap.org/r/#/c/29209/ Trying it now.

On Jan 25, 2018, at 2:13 PM, Alexis de Talhouët <adetalhoue...@gmail.com> wrote:

This is tracked by https://jira.onap.org/browse/OOM-611

Currently, OOM uses 1.1.1

Alexis

On Jan 25, 2018, at 2:07 PM, HERNANDEZ-HERRERO, JORGE <jh1...@att.com> wrote:

Hi Karthick,
With regards to policy, cannot tell you in the context of kubernetes, but we 
had a bug introduced in v1.1.2 policy docker images that was resolved with 
v1.1.3. The error shown in the text below indicates that the automated 
deployment of operational policies wasn’t successful which was one of the 
symptoms of the problems with v1.1.2.   First thing to check is if you are 
using the latest 1.1.3 images and latest amsterdam policy/docker repo?
Jorge

From: onap-discuss-boun...@lists.onap.org 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of Ramanarayanan, Karthick
Sent: Thursday, January 25, 2018 12:33 PM
To: onap-discuss@lists.onap.org
Subject: [onap-discuss] Demo update-vfw policy script when running without 
closed loop

Hi,
 In my kubernetes setup minus dcae (just vFW without closed loop),
 I am trying to mount the appc packetgen interface but I am unable to see the 
mounts in appc mounts list.
 The policy that was pushed used the kubernetes update-vfw-op-policy.sh script 
which seems to be applicable for closed loop.

 Though the policy script runs and applies the policy and restarts the policy 
pods, the get on controlloop.Params fails at the end.

 curl -v --silent --user @1b3rt:31nst31n -X GET 
http://${K8S_HOST}:${POLICY_DROOLS_PORT}/policy/pdp/engine/controllers/amsterdam/drools/facts/closedloop-amsterdam/org.onap.policy.controlloop.Params
  | python -m json.tool

{
"error": "amsterdam:closedloop-amsterdam:org.onap.policy.controlloop.Params 
not found"
}


Moving ahead, I configure the packet gen interface with an appc put to network 
topol

Re: [onap-discuss] [**EXTERNAL**] Service distribution error on latest ONAP/OOM

2018-01-25 Thread Ramanarayanan, Karthick
Thanks Alexis.

Fix is looking good but I haven't moved up yet.

Will do later.

Regards,

-Karthick


From: Alexis de Talhouët <adetalhoue...@gmail.com>
Sent: Tuesday, January 23, 2018 8:55:06 AM
To: Ramanarayanan, Karthick
Cc: onap-discuss@lists.onap.org; Bainbridge, David
Subject: Re: [**EXTERNAL**] [onap-discuss] Service distribution error on latest 
ONAP/OOM

Karthick,

The fix is out: https://gerrit.onap.org/r/#/c/28591/ and has been tested.
Expect this to be merged in a couple of hours.

Please re-test and confirm it does fix your issue when you have time.

Regards,
Alexis

On Jan 22, 2018, at 11:57 AM, Ramanarayanan, Karthick <krama...@ciena.com> wrote:

That's great Alexis.
Thanks.
(also don't be surprised if backend doesn't come up sometimes with no indicator 
in the pod logs.
 Just restart cassandra, elastic search and kibana pod before restarting 
backend pod and it would load the user profiles in the sdc-be logs :)

Regards,
-Karthick

From: Alexis de Talhouët <adetalhoue...@gmail.com>
Sent: Monday, January 22, 2018 5:10:26 AM
To: Ramanarayanan, Karthick
Cc: onap-discuss@lists.onap.org; Bainbridge, David
Subject: Re: [**EXTERNAL**] [onap-discuss] Service distribution error on latest 
ONAP/OOM

Hi Karthick,

Yes, I’m aware of this since you mentioned it last week. I reproduced the issue.
Currently implementing a fix for it. Sorry for the regression introduced.

See https://jira.onap.org/browse/OOM-608 for more details.

Thanks,
Alexis

On Jan 19, 2018, at 4:21 PM, Ramanarayanan, Karthick <krama...@ciena.com> wrote:

Hi Alexis,
 I reverted the oom commit from head to:


git checkout cb02aa241edd97acb6c5ca744de84313f53e8a5a

Author: yuryn <yury.novit...@amdocs.com>
Date:   Thu Dec 21 14:31:21 2017 +0200

Fix firefox tab crashes in VNC

Change-Id: Ie295257d98ddf32693309535e15c6ad9529f10fc
Issue-ID: OOM-531


Everything works with service creation, vnf and vf creates!
Please note that I am running with dcae disabled.
Something is broken with dcae disabled in the latest.
100% reproducible with service distribution step through operator taking a 
policy exception mailed earlier.
Have a nice weekend.

Regards,
-Karthick






From: Ramanarayanan, Karthick
Sent: Friday, January 19, 2018 8:48:23 AM
To: Alexis de Talhouët
Cc: onap-discuss@lists.onap.org
Subject: Re: [**EXTERNAL**] Re: [onap-discuss] Service distribution error on 
latest ONAP/OOM

Hi Alexis,
 I did check the policy pod logs before sending the mail.
 I didn't see anything suspicious.
 I initially suspected aai-service dns not getting resolved but you seem to 
have fixed it
 and it was accessible from policy pod.
 Nothing suspicious from any log anywhere.
 I did see that the health check on sdc pods returned all UP except: DE 
component whose health check was down.
 Not sure if it's in any way related. Could be benign.


curl http://127.0.0.1:30206/sdc1/rest/healthCheck
{
 "sdcVersion": "1.1.0",
 "siteMode": "unknown",
 "componentsInfo": [
   {
 "healthCheckComponent": "BE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   },
   {
 "healthCheckComponent": "TITAN",
 "healthCheckStatus": "UP",
 "description": "OK"
   },
   {
 "healthCheckComponent": "DE",
 "healthCheckStatus": "DOWN",
 "description": "U-EB cluster is not available"
   },
   {
 "healthCheckComponent": "CASSANDRA",
 "healthCheckStatus": "UP",
 "description": "OK"
   },
   {
 "healthCheckComponent": "ON_BOARDING",
 "healthCheckStatus": "UP

[onap-discuss] Demo update-vfw policy script when running without closed loop

2018-01-25 Thread Ramanarayanan, Karthick
Hi,

 In my kubernetes setup minus dcae (just vFW without closed loop),

 I am trying to mount the appc packetgen interface but I am unable to see the 
mounts in appc mounts list.

 The policy that was pushed used the kubernetes update-vfw-op-policy.sh script 
which seems to be applicable for closed loop.


 Though the policy script runs and applies the policy and restarts the policy 
pods, the get on controlloop.Params fails at the end.


 curl -v --silent --user @1b3rt:31nst31n -X GET 
http://${K8S_HOST}:${POLICY_DROOLS_PORT}/policy/pdp/engine/controllers/amsterdam/drools/facts/closedloop-amsterdam/org.onap.policy.controlloop.Params
  | python -m json.tool


{
"error": "amsterdam:closedloop-amsterdam:org.onap.policy.controlloop.Params 
not found"
}



Moving ahead, I configure the packet gen interface with an appc put to network 
topology for the packetgen vnf/ip.

Put succeeds but appc mounts doesn't show up.

Wondering if the policy script needs to be changed when executing without 
closed loop?

What am I missing?


Thanks,

-Karthick
___
onap-discuss mailing list
onap-discuss@lists.onap.org
https://lists.onap.org/mailman/listinfo/onap-discuss


Re: [onap-discuss] [**EXTERNAL**] Service distribution error on latest ONAP/OOM

2018-01-22 Thread Ramanarayanan, Karthick
That's great Alexis.

Thanks.

(also don't be surprised if backend doesn't come up sometimes with no indicator 
in the pod logs.

 Just restart cassandra, elastic search and kibana pod before restarting 
backend pod and it would load the user profiles in the sdc-be logs :)


Regards,

-Karthick


From: Alexis de Talhouët <adetalhoue...@gmail.com>
Sent: Monday, January 22, 2018 5:10:26 AM
To: Ramanarayanan, Karthick
Cc: onap-discuss@lists.onap.org; Bainbridge, David
Subject: Re: [**EXTERNAL**] [onap-discuss] Service distribution error on latest 
ONAP/OOM

Hi Karthick,

Yes, I’m aware of this since you mentioned it last week. I reproduced the issue.
Currently implementing a fix for it. Sorry for the regression introduced.

See https://jira.onap.org/browse/OOM-608 for more details.

Thanks,
Alexis

On Jan 19, 2018, at 4:21 PM, Ramanarayanan, Karthick <krama...@ciena.com> wrote:

Hi Alexis,
 I reverted the oom commit from head to:


git checkout cb02aa241edd97acb6c5ca744de84313f53e8a5a

Author: yuryn <yury.novit...@amdocs.com>
Date:   Thu Dec 21 14:31:21 2017 +0200

Fix firefox tab crashes in VNC

Change-Id: Ie295257d98ddf32693309535e15c6ad9529f10fc
Issue-ID: OOM-531


Everything works with service creation, vnf and vf creates!
Please note that I am running with dcae disabled.
Something is broken with dcae disabled in the latest.
100% reproducible with service distribution step through operator taking a 
policy exception mailed earlier.
Have a nice weekend.

Regards,
-Karthick




________
From: Ramanarayanan, Karthick
Sent: Friday, January 19, 2018 8:48:23 AM
To: Alexis de Talhouët
Cc: onap-discuss@lists.onap.org
Subject: Re: [**EXTERNAL**] Re: [onap-discuss] Service distribution error on 
latest ONAP/OOM

Hi Alexis,
 I did check the policy pod logs before sending the mail.
 I didn't see anything suspicious.
 I initially suspected aai-service dns not getting resolved but you seem to 
have fixed it
 and it was accessible from policy pod.
 Nothing suspicious from any log anywhere.
 I did see that the health check on the sdc pods returned all UP except the DE component, whose health check was down.
 Not sure if it's in any way related. Could be benign.


curl http://127.0.0.1:30206/sdc1/rest/healthCheck
{
 "sdcVersion": "1.1.0",
 "siteMode": "unknown",
 "componentsInfo": [
   {
 "healthCheckComponent": "BE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   },
   {
 "healthCheckComponent": "TITAN",
 "healthCheckStatus": "UP",
 "description": "OK"
   },
   {
 "healthCheckComponent": "DE",
 "healthCheckStatus": "DOWN",
 "description": "U-EB cluster is not available"
   },
   {
 "healthCheckComponent": "CASSANDRA",
 "healthCheckStatus": "UP",
 "description": "OK"
   },
   {
 "healthCheckComponent": "ON_BOARDING",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK",
 "componentsInfo": [
   {
 "healthCheckComponent": "ZU",
 "healthCheckStatus": "UP",
 "version": "0.2.0",
 "description": "OK"
   },
   {
 "healthCheckComponent": "BE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   },
   {
 "healthCheckComponent": "CAS",
 "healthCheckStatus": "UP",
 "version": "2.1.17",
 "description": "OK"
   },
   {
 "healthCheckComponent": "FE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   }
 ]
   },
   {
 "healthCheckComponent": "FE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
    }
  ]

Re: [onap-discuss] [**EXTERNAL**] Re: Service distribution error on latest ONAP/OOM

2018-01-19 Thread Ramanarayanan, Karthick
Hi Alexis,

 I reverted the oom commit from head to:


git checkout cb02aa241edd97acb6c5ca744de84313f53e8a5a

Author: yuryn <yury.novit...@amdocs.com>
Date:   Thu Dec 21 14:31:21 2017 +0200

Fix firefox tab crashes in VNC

Change-Id: Ie295257d98ddf32693309535e15c6ad9529f10fc
Issue-ID: OOM-531
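
(For completeness, a typical way to redeploy after checking out an older OOM commit,
assuming the amsterdam oneclick scripts, is roughly:

cd oom/kubernetes/oneclick
./deleteAll.bash -n onap
./createAll.bash -n onap

The exact steps depend on how the cluster was brought up in the first place.)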


Everything works with service creation, vnf and vf creates!
Please note that I am running with dcae disabled.
Something is broken with dcae disabled in the latest.
It is 100% reproducible: the service distribution step, performed as the operator, hits the policy exception I mailed earlier.
Have a nice weekend.

Regards,
-Karthick





From: Ramanarayanan, Karthick
Sent: Friday, January 19, 2018 8:48:23 AM
To: Alexis de Talhouët
Cc: onap-discuss@lists.onap.org
Subject: Re: [**EXTERNAL**] Re: [onap-discuss] Service distribution error on 
latest ONAP/OOM


Hi Alexis,

 I did check the policy pod logs before sending the mail.

 I didn't see anything suspicious.

 I initially suspected aai-service dns not getting resolved but you seem to 
have fixed it

 and it was accessible from policy pod.

 Nothing suspicious from any log anywhere.

 I did see that the health check on the sdc pods returned all UP except the DE component, whose health check was down.

 Not sure if it's in any way related. Could be benign.


curl http://127.0.0.1:30206/sdc1/rest/healthCheck
{
 "sdcVersion": "1.1.0",
 "siteMode": "unknown",
 "componentsInfo": [
   {
 "healthCheckComponent": "BE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   },
   {
 "healthCheckComponent": "TITAN",
 "healthCheckStatus": "UP",
 "description": "OK"
   },
   {
 "healthCheckComponent": "DE",
 "healthCheckStatus": "DOWN",
 "description": "U-EB cluster is not available"
   },
   {
 "healthCheckComponent": "CASSANDRA",
 "healthCheckStatus": "UP",
 "description": "OK"
   },
   {
 "healthCheckComponent": "ON_BOARDING",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK",
 "componentsInfo": [
   {
 "healthCheckComponent": "ZU",
 "healthCheckStatus": "UP",
 "version": "0.2.0",
 "description": "OK"
   },
   {
 "healthCheckComponent": "BE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   },
   {
 "healthCheckComponent": "CAS",
 "healthCheckStatus": "UP",
 "version": "2.1.17",
 "description": "OK"
   },
   {
 "healthCheckComponent": "FE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   }
 ]
   },
   {
 "healthCheckComponent": "FE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   }
 ]
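
(If DE keeps reporting the U-EB cluster as unavailable, it can help to check which UEB
hosts the backend was configured with; the config path below is from memory and may
differ per image version:

kubectl -n onap-sdc get pods | grep sdc-be
kubectl -n onap-sdc exec <sdc-be-pod> -- \
  grep -A5 uebServers /var/lib/jetty/config/catalog-be/distribution-engine-configuration.yaml

If the hosts listed there are not reachable from the pod, the DE component will stay
DOWN.)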



On some occasions the backend doesn't come up even though the pods are running.

(This was seen on other nodes running onap, and was there even without your changes. Logs indicated nothing.

But if I restart the sdc pods for cassandra, elasticsearch and kibana before restarting the backend, the backend starts responding and ends up creating the user profile entries for the various onap user roles, as seen in the logs. But this is unrelated to this service distribution error, as the backend is up.)



Regards,

-Karthick



From: Alexis de Talhouët <adetalhoue...@gmail.com>
Sent: Friday, January 19, 2018 4:54 AM
To: Ramanarayanan, Karthick
Cc: onap-discuss@lists.onap.org
Subject: [**EXTERNAL**] Re: [onap-discuss] Service distribution error on latest 
ONAP/OOM

Hi,

Could you look at the Policy logs for errors? For that you need to go into the pods themselves, under /var/log/onap.
You could do the same for the SDC container (backend).
The thing that could have affected Policy is the fact that we removed the persisted data of mariadb, because it was bogus
(https://gerrit.onap.org/r/#/c/27521/).

Re: [onap-discuss] [**EXTERNAL**] Re: Service distribution error on latest ONAP/OOM

2018-01-19 Thread Ramanarayanan, Karthick
FWIW, this is the log from policy drools pod. I didn't think it was suspicious 
or related.

But here you go for the topic not found error log that keeps coming every 15 
seconds.

Probably not related to distribution error:


[2018-01-19 16:50:28,582|WARN|CambriaConsumerImpl|UEB-source-unauthenticated.DCAE_CL_OUTPUT] Topic not found: /events/unauthenticated.DCAE_CL_OUTPUT/df217580-32bf-4ec5-bdd8-55971c20ad31/0?timeout=15000&limit=100
[2018-01-19 16:50:43,586|WARN|CambriaConsumerImpl|UEB-source-unauthenticated.DCAE_CL_OUTPUT] Topic not found: /events/unauthenticated.DCAE_CL_OUTPUT/df217580-32bf-4ec5-bdd8-55971c20ad31/0?timeout=15000&limit=100
[2018-01-19 16:50:43,586|WARN|CambriaConsumerImpl|UEB-source-APPC-LCM-WRITE] Topic not found: /events/APPC-LCM-WRITE/91345324-2bae-47c0-94cb-cc0bc8229231/0?timeout=15000&limit=100
[2018-01-19 16:50:58,589|WARN|CambriaConsumerImpl|UEB-source-APPC-LCM-WRITE] Topic not found: /events/APPC-LCM-WRITE/91345324-2bae-47c0-94cb-cc0bc8229231/0?timeout=15000&limit=100
[2018-01-19 16:50:58,589|WARN|CambriaConsumerImpl|UEB-source-unauthenticated.DCAE_CL_OUTPUT] Topic not found: /events/unauthenticated.DCAE_CL_OUTPUT/df217580-32bf-4ec5-bdd8-55971c20ad31/0?timeout=15000&limit=100
[2018-01-19 16:51:13,593|WARN|CambriaConsumerImpl|UEB-source-APPC-LCM-WRITE] Topic not found: /events/APPC-LCM-WRITE/91345324-2bae-47c0-94cb-cc0bc8229231/0?timeout=15000&limit=100
[2018-01-19 16:51:13,593|WARN|CambriaConsumerImpl|UEB-source-unauthenticated.DCAE_CL_OUTPUT] Topic not found: /events/unauthenticated.DCAE_CL_OUTPUT/df217580-32bf-4ec5-bdd8-55971c20ad31/0?timeout=15000&limit=100
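
(These warnings usually just mean the topics don't exist on the message router yet;
unauthenticated topics are normally auto-created on first publish. A quick way to see
what actually exists, with ${MESSAGE_ROUTER_PORT} being whatever NodePort exposes
message-router's 3904 in your cluster:

curl -s http://${K8S_HOST}:${MESSAGE_ROUTER_PORT}/topics | python -m json.tool

If unauthenticated.DCAE_CL_OUTPUT and APPC-LCM-WRITE are missing, the consumers will
keep logging "Topic not found" on every poll.)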


Regards,

-Karthick


From: Ramanarayanan, Karthick
Sent: Friday, January 19, 2018 8:48:23 AM
To: Alexis de Talhouët
Cc: onap-discuss@lists.onap.org
Subject: Re: [**EXTERNAL**] Re: [onap-discuss] Service distribution error on 
latest ONAP/OOM


Hi Alexis,

 I did check the policy pod logs before sending the mail.

 I didn't see anything suspicious.

 I initially suspected aai-service dns not getting resolved but you seem to 
have fixed it

 and it was accessible from policy pod.

 Nothing suspicious from any log anywhere.

 I did see that the health check on the sdc pods returned all UP except the DE component, whose health check was down.

 Not sure if it's in any way related. Could be benign.


curl http://127.0.0.1:30206/sdc1/rest/healthCheck
{
 "sdcVersion": "1.1.0",
 "siteMode": "unknown",
 "componentsInfo": [
   {
 "healthCheckComponent": "BE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   },
   {
 "healthCheckComponent": "TITAN",
 "healthCheckStatus": "UP",
 "description": "OK"
   },
   {
 "healthCheckComponent": "DE",
 "healthCheckStatus": "DOWN",
 "description": "U-EB cluster is not available"
   },
   {
 "healthCheckComponent": "CASSANDRA",
 "healthCheckStatus": "UP",
 "description": "OK"
   },
   {
 "healthCheckComponent": "ON_BOARDING",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK",
 "componentsInfo": [
   {
 "healthCheckComponent": "ZU",
 "healthCheckStatus": "UP",
 "version": "0.2.0",
 "description": "OK"
   },
   {
 "healthCheckComponent": "BE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   },
   {
 "healthCheckComponent": "CAS",
 "healthCheckStatus": "UP",
 "version": "2.1.17",
 "description": "OK"
   },
   {
 "healthCheckComponent": "FE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   }
 ]
   },
   {
 "healthCheckComponent": "FE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   }
 ]



On some occasions the backend doesn't come up even though the pods are running.

(This was seen on other nodes running onap, and was there even without your changes. Logs indicated nothing.

But if I restart the sdc pods for cassandra, elasticsearch and kibana before restarting the backend, the backend starts responding and ends up creating the user profile entries for the various onap user roles, as seen in the logs. But this is unrelated to this service distribution error, as the backend is up.)

Re: [onap-discuss] [**EXTERNAL**] Re: Service distribution error on latest ONAP/OOM

2018-01-19 Thread Ramanarayanan, Karthick
Hi Alexis,

 I did check the policy pod logs before sending the mail.

 I didn't see anything suspicious.

 I initially suspected aai-service dns not getting resolved but you seem to 
have fixed it

 and it was accessible from policy pod.

 Nothing suspicious from any log anywhere.

 I did see that the health check on the sdc pods returned all UP except the DE component, whose health check was down.

 Not sure if it's in any way related. Could be benign.


curl http://127.0.0.1:30206/sdc1/rest/healthCheck
{
 "sdcVersion": "1.1.0",
 "siteMode": "unknown",
 "componentsInfo": [
   {
 "healthCheckComponent": "BE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   },
   {
 "healthCheckComponent": "TITAN",
 "healthCheckStatus": "UP",
 "description": "OK"
   },
   {
 "healthCheckComponent": "DE",
 "healthCheckStatus": "DOWN",
 "description": "U-EB cluster is not available"
   },
   {
 "healthCheckComponent": "CASSANDRA",
 "healthCheckStatus": "UP",
 "description": "OK"
   },
   {
 "healthCheckComponent": "ON_BOARDING",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK",
 "componentsInfo": [
   {
 "healthCheckComponent": "ZU",
 "healthCheckStatus": "UP",
 "version": "0.2.0",
 "description": "OK"
   },
   {
 "healthCheckComponent": "BE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   },
   {
 "healthCheckComponent": "CAS",
 "healthCheckStatus": "UP",
 "version": "2.1.17",
 "description": "OK"
   },
   {
 "healthCheckComponent": "FE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   }
 ]
   },
   {
 "healthCheckComponent": "FE",
 "healthCheckStatus": "UP",
 "version": "1.1.0",
 "description": "OK"
   }
 ]



On some occasions the backend doesn't come up even though the pods are running.

(This was seen on other nodes running onap, and was there even without your changes. Logs indicated nothing.

But if I restart the sdc pods for cassandra, elasticsearch and kibana before restarting the backend, the backend starts responding and ends up creating the user profile entries for the various onap user roles, as seen in the logs. But this is unrelated to this service distribution error, as the backend is up.)



Regards,

-Karthick



From: Alexis de Talhouët <adetalhoue...@gmail.com>
Sent: Friday, January 19, 2018 4:54 AM
To: Ramanarayanan, Karthick
Cc: onap-discuss@lists.onap.org
Subject: [**EXTERNAL**] Re: [onap-discuss] Service distribution error on latest 
ONAP/OOM

Hi,

Could you look at the Policy logs for errors? For that you need to go into the pods themselves, under /var/log/onap.
You could do the same for the SDC container (backend).
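For example, something along these lines (pod names are illustrative; add -c <container>
if a pod has more than one):

kubectl -n onap-policy exec <drools-pod> -- find /var/log/onap -name 'error*.log'
kubectl -n onap-sdc exec <sdc-be-pod> -- find /var/log/onap -name 'error*.log'

and then tail whichever error logs show up there.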
The thing that could have affected Policy is the fact that we removed the persisted data of mariadb, because it was bogus
(https://gerrit.onap.org/r/#/c/27521/). But I doubt it explains your issue.
Besides that, nothing with a potentially disruptive effect happened to Policy.
The DCAE work was well tested before it got merged. I’ll re-test sometime today or early next week to make sure nothing has slipped through the cracks.

Thanks,
Alexis

On Jan 18, 2018, at 11:44 PM, Ramanarayanan, Karthick 
<krama...@ciena.com<mailto:krama...@ciena.com>> wrote:


Hi,
 Trying to distribute a demo firewall service instance on a kubernetes host 
running ONAP, I am seeing a new policy exception error on the latest oom on 
amsterdam.
(dcae deploy is false and disableDcae is true)

Error code: POL5000
Status code: 500
Internal Server Error. Please try again later.


All pods are up. Health check seems to be fine on all pods.
k8s pod logs don't seem to reveal anything, and this happens consistently whenever I try to distribute the service as an operator.

[onap-discuss] Service distribution error on latest ONAP/OOM

2018-01-18 Thread Ramanarayanan, Karthick
Hi,
 Trying to distribute a demo firewall service instance on a kubernetes host 
running ONAP, I am seeing a new policy exception error on the latest oom on 
amsterdam.
(dcae deploy is false and disableDcae is true)

Error code: POL5000
Status code: 500
Internal Server Error. Please try again later.


All pods are up. Health check seems to be fine on all pods.

k8s pod logs don't seem to reveal anything and this happens consistently 
whenever I try to distribute the service as an operator.


It was working fine last week.

Even yesterday I didn't get this error, though I got a different one related to a createVnfInfra notify exception in the SO vnf create workflow step; but that was a different failure from this one.


After the dcae config changes got merged, this service distribution error seems 
to have popped up. (dcae is disabled for my setup)


What am I missing?


Thanks,

-Karthick
___
onap-discuss mailing list
onap-discuss@lists.onap.org
https://lists.onap.org/mailman/listinfo/onap-discuss


Re: [onap-discuss] [**EXTERNAL**] Re: Failure to create the demo vfirewall VNF

2018-01-09 Thread Ramanarayanan, Karthick
Inline ...



From: Alexis de Talhouët <adetalhoue...@gmail.com>
Sent: Tuesday, January 9, 2018 10:40 AM
To: Ramanarayanan, Karthick
Cc: BRIAN D FREEMAN; onap-discuss@lists.onap.org
Subject: [**EXTERNAL**] Re: [onap-discuss] Failure to create the demo vfirewall 
VNF

Hi,

Please refer to

https://wiki.onap.org/display/DW/ONAP+on+Kubernetes+on+Rancher+in+OpenStack

for setting up OOM on Kubernetes, on Rancher, in OpenStack. Please disregard the update wrt DCAE (I said it was merged, but it isn't just yet).


Yes. It was using that doc that the rancher/k8s setup was brought up for onap.
I have to check back on DCAE and on running without dcae on k8s again though!


Regards,
-Karthick

And please refer to

https://wiki.onap.org/display/DW/vFWCL+instantiation%2C+testing%2C+and+debuging

for running the vFWCL use case. Of course, without DCAE, you won't have closed loop.




Thanks,
Alexis


On Jan 9, 2018, at 1:36 PM, Ramanarayanan, Karthick 
<krama...@ciena.com<mailto:krama...@ciena.com>> wrote:

Yes.
I checked at sdnc pods, vnf pods, mso pods.
Found nothing pertaining to the error.
A sample sdnc pod check (apart from other pods)
kubectl -n onap-sdnc -c sdnc-controller-container logs sdnc-1395102659-w48gp

Now I have to redo this test again as I am running out of memory here.
Note that this is a 14-core machine (28 threads, 56 total in cpuinfo) with 128 gigs of ram (Intel Xeon E5-2683 v3).
Before I ran the OOM, I had /proc/sys/vm/drop_caches done to just start with a 
free memory of over
120 gigs. (After clearing the kernel buffer caches for a big head start for 
ONAP :)

Then the kubernetes setup to relaunch onap/rancher,etc.

These have been up for over 3-4 days now, and my free memory is almost down to 30 gigs on the caches side. Free memory minus caches is a meagre 800 megs.

Now suddenly the portals are unresponsive.

The sdc frontend has stopped responding (gateway timeouts), though I can still curl sdc2/rest/version on the backend. The frontend is dead!

It's a cluster fuck now.

I will restart the onap setup again before redoing the demo vfwcl instance 
create.
(also with delete rollback set to false during vf create as Marco suggested)

Hopefully you have tested a demo instance creation on a k8s setup with OOM on 
amsterdam.
 (not with ONAP running on openstack as the demo video shows)

Thanks,
-Karthick


From: FREEMAN, BRIAN D <bf1...@att.com<mailto:bf1...@att.com>>
Sent: Tuesday, January 9, 2018 5:41:48 AM
To: Ramanarayanan, Karthick; 
onap-discuss@lists.onap.org<mailto:onap-discuss@lists.onap.org>
Subject: [**EXTERNAL**] RE: Failure to create the demo vfirewall VNF

You need to look at SO logs for the SdncAdapter and VnfAdapter and see what 
error you are getting from SDNC or Openstack in SO.



Usually a delete implies that SO talks to SDNC correctly, but then the interaction with Openstack fails, or SO fails when it goes to SDNC or AAI for the data needed to create the Openstack heat stack.



Brian





From: 
onap-discuss-boun...@lists.onap.org<mailto:onap-discuss-boun...@lists.onap.org> 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of Ramanarayanan, 
Karthick
Sent: Monday, January 08, 2018 6:33 PM
To: onap-discuss@lists.onap.org<mailto:onap-discuss@lists.onap.org>
Subject: [onap-discuss] Failure to create the demo vfirewall VNF



Hi,
 I am trying to instantiate the demo vfirewall-vsink service instance based on 
the video here:



 
https://wiki.onap.org/display/DW/Running+the+ONAP+Demos?preview=/1015891/16010290/vFW_closed_loop.mp4#RunningtheONAPDemos-VNFOnboarding,Instantiation,andClosed-loopOperations



 While I am able to move ahead and reach the vf create step, the add VF step for both the vfirewall and packet gen instances results in failure and moves to pending delete.
 (sdnc preload checkbox enabled)



 The setup I have is OOM configured for kubernetes.
 I am on amsterdam release branch for O

Re: [onap-discuss] Failure to create the demo vfirewall VNF

2018-01-09 Thread Ramanarayanan, Karthick
Yes.

I checked at sdnc pods, vnf pods, mso pods.

Found nothing pertaining to the error.

A sample sdnc pod check (apart from other pods)

kubectl -n onap-sdnc -c sdnc-controller-container logs sdnc-1395102659-w48gp
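
(The SO side can be checked the same way, per Brian's suggestion below; the pod name is
whatever "kubectl -n onap-mso get pods" reports:

kubectl -n onap-mso logs <mso-pod> | grep -iE 'sdncadapter|vnfadapter|error'
)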


Now I have to redo this test again as I am running out of memory here.

Note that this is a 14-core machine (28 threads, 56 total in cpuinfo) with 128 gigs of ram (Intel Xeon E5-2683 v3).

Before I ran the OOM, I had /proc/sys/vm/drop_caches done to just start with a 
free memory of over

120 gigs. (After clearing the kernel buffer caches for a big head start for 
ONAP :)


Then the kubernetes setup to relaunch onap/rancher,etc.


These have been up for over 3-4 days now, and my free memory is almost down to 30 gigs on the caches side. Free memory minus caches is a meagre 800 megs.


Now suddenly the portals are unresponsive.


The sdc frontend has stopped responding (gateway timeouts), though I can still curl sdc2/rest/version on the backend. The frontend is dead!


It's a cluster fuck now.


I will restart the onap setup again before redoing the demo vfwcl instance 
create.

(also with delete rollback set to false during vf create as Marco suggested)


Hopefully you have tested a demo instance creation on a k8s setup with OOM on 
amsterdam.

 (not with ONAP running on openstack as the demo video shows)


Thanks,

-Karthick



From: FREEMAN, BRIAN D <bf1...@att.com>
Sent: Tuesday, January 9, 2018 5:41:48 AM
To: Ramanarayanan, Karthick; onap-discuss@lists.onap.org
Subject: [**EXTERNAL**] RE: Failure to create the demo vfirewall VNF


You need to look at SO logs for the SdncAdapter and VnfAdapter and see what 
error you are getting from SDNC or Openstack in SO.



Usually a delete implies that SO talks to SDNC correctly, but then the interaction with Openstack fails, or SO fails when it goes to SDNC or AAI for the data needed to create the Openstack heat stack.



Brian





From: onap-discuss-boun...@lists.onap.org 
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of Ramanarayanan, 
Karthick
Sent: Monday, January 08, 2018 6:33 PM
To: onap-discuss@lists.onap.org
Subject: [onap-discuss] Failure to create the demo vfirewall VNF



Hi,

 I am trying to instantiate the demo vfirewall-vsink service instance based on 
the video here:



 
https://wiki.onap.org/display/DW/Running+the+ONAP+Demos?preview=/1015891/16010290/vFW_closed_loop.mp4#RunningtheONAPDemos-VNFOnboarding,Instantiation,andClosed-loopOperations



 While I am able to move ahead and reach the vf create step, the add VF step for both the vfirewall and packet gen instances results in failure and moves to pending delete.

 (sdnc preload checkbox enabled)



 The setup I have is OOM configured for kubernetes.

 I am on amsterdam release branch for OOM.

So it's not using openstack heat templates to instantiate ONAP; rather, just a single node configured with k8s oneclick.



 The difference from the video pertains to running demo-k8s.sh init from oom/kubernetes/robot to instantiate the customer models and services, after copying demo/heat to the kubernetes robot /share directory for the distribution to work.



 The demo vfirewall closed loop service seems to be distributed as seen in the 
ONAP portal.

 ( I have also tried redistributing)

 I am trying to instantiate the service for demo vFWCL.



 Logs indicate nothing in sdnc after the vF create failure.

 I don't see any requests hitting Openstack logs either.



 The VNF preload operation seems to have succeeded for the firewall closed loop 
instances.

 The VNF profiles are also present in sdnc.



 I get back a http success (status code 200) with the request id as expected.

 A subsequent GET request for vnf topology information preloaded also works.



 Doing a POST using sdnc api (port 8282) works for the vnf topology preload 
step.

 But I see nothing in the logs for sdnc pods. (nothing relevant in sdc/mso pod 
either if that matters)



 Here is a snippet from curl to sdnc as well:



curl -vX POST 
http://admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U@10.43.172.252:8282/restconf/operations/VNF-API:preload-vnf-topology-operation
 -d @firewall.json --header "Content-Type: application/json"



Re: [onap-discuss] Failure to create the demo vfirewall VNF

2018-01-08 Thread Ramanarayanan, Karthick
Hi Marco,


Thanks for the response.

Comments inline.


I have retried now with the changes to the demo versions, but it doesn't change the outcome.

Same status as before.


More inline ...


>What error are you getting precisely?


I get maximum poll attempts exceeded.

I thought that was a benign error from the GUI waiting to refresh the request status.

But after that, refreshing the service instance list moves the VF to pending delete.


(also unable to delete once in this state as mentioned earlier. No hint in 
sdnc/mso/vid portal logs whatsoever)


Here's a snapshot for your perusal:


https://www.dropbox.com/s/z6b69aq40k6hw61/Screenshot%202018-01-08%2018.25.06.png?dl=



> How did you set onap-parameters.yaml in OOM? The OpenStack parameters are 
> used by SO to instantiate the VNFs.


Yes, I am aware of that; without it, model init through the demo-k8s robot would have failed anyway.


Here's the onap-parameters.yaml for our k8s environment under 
oom/kubernetes/config:


#cat onap-parameters.yaml
--

OPENSTACK_UBUNTU_14_IMAGE: "Ubuntu_14.04.5_LTS"
OPENSTACK_PUBLIC_NET_ID: "aff52391-359f-4264-9768-1948c83021bc"
OPENSTACK_OAM_NETWORK_ID: "abd45e55-bbac-4ec4-84cf-38479b1cf075"
OPENSTACK_OAM_SUBNET_ID: "28222064-0f44-4758-94c5-72eaaf726837"
OPENSTACK_OAM_NETWORK_CIDR: "10.0.0.0/26"
OPENSTACK_USERNAME: "onap"
OPENSTACK_API_KEY: "secret"
OPENSTACK_TENANT_NAME: "onap"
OPENSTACK_TENANT_ID: "45947f2541a34498b340eb93b56b5c24"
OPENSTACK_REGION: "RegionOne"
OPENSTACK_KEYSTONE_URL: "http://192.168.42.240/identity"
OPENSTACK_FLAVOUR_MEDIUM: "m1.small"
OPENSTACK_SERVICE_TENANT_NAME: "service"
DMAAP_TOPIC: "AUTO"
DEMO_ARTIFACTS_VERSION: "1.1.0-SNAPSHOT"
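
(A quick sanity check that SO can actually reach that Keystone endpoint from inside the
cluster, assuming curl is available in the container and with the pod name being
illustrative:

kubectl -n onap-mso exec <mso-pod> -- curl -s -o /dev/null -w '%{http_code}\n' http://192.168.42.240/identity

Anything other than a 2xx/3xx here would point at a connectivity problem rather than a
parameter one.)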





Maybe the DEMO artifacts version above needs to be 1.1.1 as I haven't changed 
that yet.

Can try with that if you think that's the issue.



>Also, note that for Amsterdam release, the following parameters should be used:


Done. Posted the vnf preload topology operation with demo artifacts 1.1.1 and 
install 1.1.1.
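
i.e. the relevant vnf-parameters entries now read (just those two, with the values
described above):

{
"vnf-parameter-name": "demo_artifacts_version",
"vnf-parameter-value": "1.1.1"
},
{
"vnf-parameter-name": "install_script_version",
"vnf-parameter-value": "1.1.1"
}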

Confirmed with a GET as well to see the new posted data for the vnfs.

Same error as mentioned above.


 Regards,

-Karthick



From: <onap-discuss-boun...@lists.onap.org> on behalf of "Ramanarayanan, 
Karthick" <krama...@ciena.com>
Date: Monday, January 8, 2018 at 6:32 PM
To: "onap-discuss@lists.onap.org" <onap-discuss@lists.onap.org>
Subject: [onap-discuss] Failure to create the demo vfirewall VNF



{

"vnf-parameter-name": "demo_artifacts_version",

"vnf-parameter-value": "1.1.0"

},

{

"vnf-parameter-name": "install_script_version",

"vnf-parameter-value": "1.1.0-SNAPSHOT"
___
onap-discuss mailing list
onap-discuss@lists.onap.org
https://lists.onap.org/mailman/listinfo/onap-discuss


[onap-discuss] Failure to create the demo vfirewall VNF

2018-01-08 Thread Ramanarayanan, Karthick
Hi,

 I am trying to instantiate the demo vfirewall-vsink service instance based on 
the video here:



 
https://wiki.onap.org/display/DW/Running+the+ONAP+Demos?preview=/1015891/16010290/vFW_closed_loop.mp4#RunningtheONAPDemos-VNFOnboarding,Instantiation,andClosed-loopOperations


 While I am able to move ahead and reach the vf create step, the add VF step for both the vfirewall and packet gen instances results in failure and moves to pending delete.

 (sdnc preload checkbox enabled)


 The setup I have is OOM configured for kubernetes.

 I am on amsterdam release branch for OOM.

So it's not using openstack heat templates to instantiate ONAP; rather, just a single node configured with k8s oneclick.


 The difference from the video pertains to running demo-k8s.sh init from oom/kubernetes/robot to instantiate the customer models and services, after copying demo/heat to the kubernetes robot /share directory for the distribution to work.


 The demo vfirewall closed loop service seems to be distributed as seen in the 
ONAP portal.

 ( I have also tried redistributing)

 I am trying to instantiate the service for demo vFWCL.


 Logs indicate nothing in sdnc after the vF create failure.

 I don't see any requests hitting Openstack logs either.


 The VNF preload operation seems to have succeeded for the firewall closed loop 
instances.

 The VNF profiles are also present in sdnc.


 I get back a http success (status code 200) with the request id as expected.

 A subsequent GET request for vnf topology information preloaded also works.


 Doing a POST using sdnc api (port 8282) works for the vnf topology preload 
step.

 But I see nothing in the logs for sdnc pods. (nothing relevant in sdc/mso pod 
either if that matters)


 Here is a snippet from curl to sdnc as well:


curl -vX POST 
http://admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U@10.43.172.252:8282/restconf/operations/VNF-API:preload-vnf-topology-operation
 -d @firewall.json --header "Content-Type: application/json"


Response status is http 200 as expected.

{
 "output": {
   "svc-request-id": "robot12",
   "response-code": "200",
   "ack-final-indicator": "Y"
 }
}


Doing a GET request for the vnf topology preload information posted above also returns the expected information:


curl -sSL 
http://admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U@10.43.172.252:8282/restconf/config/VNF-API:preload-vnfs/vnf-preload-list/vfirewall-1/3d6f8762Fea8453fBd9a..base_vfw..module-0/preload-data/vnf-topology-information/vnf-topology-identifier
 | jq .

{
 "vnf-topology-identifier": {
   "vnf-name": "vfirewall-1",
   "generic-vnf-type": "3d6f8762-fea8-453f-bd9a 0",
   "generic-vnf-name": "firewall-vnf",
   "vnf-type": "3d6f8762Fea8453fBd9a..base_vfw..module-0",
   "service-type": "5b5c134c-662e-407e-bc4c-4200b4b5cb02"
 }
}


I am also attaching the firewall.json POST request for the vnf topology preload to the sdnc portal.

The VNF profiles have all been added to sdnc portal for the 2 vfs.


But still, the VF create request (for both vfirewall sink and packetgen) results in failure and moves the transaction to pending delete.

There are no api request traces in Openstack either, and I don't see any volume creates for the vnf instance.


Also, once in pending delete, I cannot delete the instance either.


I am also enclosing the firewall POST json request for one of the vfirewall vnfs whose vnf topology information was preloaded to sdnc (mentioned above).


Attaching it inline to this mail --


{
"input": {
"vnf-topology-information": {
"vnf-topology-identifier": {
"service-type": "5b5c134c-662e-407e-bc4c-4200b4b5cb02",
"vnf-name": "vfirewall-1",
"vnf-type": "3d6f8762Fea8453fBd9a..base_vfw..module-0",
"generic-vnf-name": "firewall-vnf",
"generic-vnf-type": "3d6f8762-fea8-453f-bd9a 0"
},
"vnf-assignments": {
"availability-zones": [],
"vnf-networks": [],
"vnf-vms": []
},
  "vnf-parameters":
  [
{
"vnf-parameter-name": "image_name",
"vnf-parameter-value": "Ubuntu_14.04.5_LTS"
},
{
"vnf-parameter-name": "flavor_name",
"vnf-parameter-value": "m1.small"
},
{
"vnf-parameter-name": "public_net_id",
"vnf-parameter-value": "aff52391-359f-4264-9768-1948c83021bc"
},
{
"vnf-parameter-name": "unprotected_private_net_id",
"vnf-parameter-value": "zdfw1fwl01_unprotected"
},
{
"vnf-parameter-name": "unprotected_private_subnet_id",
"vnf-parameter-value": "zdfw1fwl01_unprotected_sub"
},
{
"vnf-parameter-name": "protected_private_net_id",
"vnf-parameter-value": "zdfw1fwl01_protected"
},
{
"vnf-parameter-name": "protected_private_subnet_id",
"vnf-parameter-value": "zdfw1fwl01_protected_sub"
},
{
"vnf-parameter-name": "onap_private_net_id",
"vnf-parameter-value": "private"
},
{
"vnf-parameter-name": "onap_private_subnet_id",
"vnf-parameter-value": "private"
},
{
"vnf-parameter-name":