When the queue/topic has not been created yet, because there hasn't been a
first event for an unauthenticated topic, you will get this message.

It's a normal message until the first VES event results in a message being
published on the topic.
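As a hedged sketch of what Brian describes: on the DMaaP message router, publishing a first event to an unauthenticated topic creates it as a side effect, after which the "Topic not found" warnings stop. The host, port, and payload below are my own assumptions, not taken from this thread; adjust them to your deployment.

```python
# Illustrative sketch (assumptions, not from this thread): posting the first
# event to an unauthenticated DMaaP topic auto-creates the topic.
import json
import urllib.request

MR = "http://message-router.onap:3904"  # assumed message-router address
TOPIC = "unauthenticated.DCAE_CL_OUTPUT"

url = f"{MR}/events/{TOPIC}"
payload = json.dumps({"note": "first event to create the topic"}).encode()
req = urllib.request.Request(
    url, data=payload, headers={"Content-Type": "application/json"}
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("publish status:", resp.status)
except OSError as exc:
    # Expected when message-router is not reachable from this host.
    print("message-router not reachable:", exc)
```

Once the publish succeeds, the consumers polling `/events/<topic>/...` should start returning normally instead of logging the warning every poll interval.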

Brian


From: [email protected] 
[mailto:[email protected]] On Behalf Of Ramanarayanan, 
Karthick
Sent: Friday, January 19, 2018 11:54 AM
To: Alexis de Talhouët <[email protected]>
Cc: [email protected]
Subject: Re: [onap-discuss] [**EXTERNAL**] Re: Service distribution error on 
latest ONAP/OOM


FWIW, this is the log from the policy drools pod. I didn't think it was
suspicious or related.

But here is the "Topic not found" error log that keeps coming every 15
seconds.

Probably not related to the distribution error:


[2018-01-19 16:50:28,582|WARN|CambriaConsumerImpl|UEB-source-unauthenticated.DCAE_CL_OUTPUT] Topic not found: /events/unauthenticated.DCAE_CL_OUTPUT/df217580-32bf-4ec5-bdd8-55971c20ad31/0?timeout=15000&limit=100
[2018-01-19 16:50:43,586|WARN|CambriaConsumerImpl|UEB-source-unauthenticated.DCAE_CL_OUTPUT] Topic not found: /events/unauthenticated.DCAE_CL_OUTPUT/df217580-32bf-4ec5-bdd8-55971c20ad31/0?timeout=15000&limit=100
[2018-01-19 16:50:43,586|WARN|CambriaConsumerImpl|UEB-source-APPC-LCM-WRITE] Topic not found: /events/APPC-LCM-WRITE/91345324-2bae-47c0-94cb-cc0bc8229231/0?timeout=15000&limit=100
[2018-01-19 16:50:58,589|WARN|CambriaConsumerImpl|UEB-source-APPC-LCM-WRITE] Topic not found: /events/APPC-LCM-WRITE/91345324-2bae-47c0-94cb-cc0bc8229231/0?timeout=15000&limit=100
[2018-01-19 16:50:58,589|WARN|CambriaConsumerImpl|UEB-source-unauthenticated.DCAE_CL_OUTPUT] Topic not found: /events/unauthenticated.DCAE_CL_OUTPUT/df217580-32bf-4ec5-bdd8-55971c20ad31/0?timeout=15000&limit=100
[2018-01-19 16:51:13,593|WARN|CambriaConsumerImpl|UEB-source-APPC-LCM-WRITE] Topic not found: /events/APPC-LCM-WRITE/91345324-2bae-47c0-94cb-cc0bc8229231/0?timeout=15000&limit=100
[2018-01-19 16:51:13,593|WARN|CambriaConsumerImpl|UEB-source-unauthenticated.DCAE_CL_OUTPUT] Topic not found: /events/unauthenticated.DCAE_CL_OUTPUT/df217580-32bf-4ec5-bdd8-55971c20ad31/0?timeout=15000&limit=100


Regards,

-Karthick

________________________________
From: Ramanarayanan, Karthick
Sent: Friday, January 19, 2018 8:48:23 AM
To: Alexis de Talhouët
Cc: [email protected]
Subject: Re: [**EXTERNAL**] Re: [onap-discuss] Service distribution error on 
latest ONAP/OOM


Hi Alexis,

I did check the policy pod logs before sending the mail. I didn't see
anything suspicious.

I initially suspected the aai-service DNS not getting resolved, but you seem
to have fixed that, and it was accessible from the policy pod. Nothing
suspicious in any log anywhere.

I did see that the health check on the SDC pods returned all components UP
except the DE component, whose health check was down. Not sure if it's in
any way related; could be benign.


curl http://127.0.0.1:30206/sdc1/rest/healthCheck
{
 "sdcVersion": "1.1.0",
 "siteMode": "unknown",
 "componentsInfo": [
   {
     "healthCheckComponent": "BE",
     "healthCheckStatus": "UP",
     "version": "1.1.0",
     "description": "OK"
   },
   {
     "healthCheckComponent": "TITAN",
     "healthCheckStatus": "UP",
     "description": "OK"
   },
   {
     "healthCheckComponent": "DE",
     "healthCheckStatus": "DOWN",
     "description": "U-EB cluster is not available"
   },
   {
     "healthCheckComponent": "CASSANDRA",
     "healthCheckStatus": "UP",
     "description": "OK"
   },
   {
     "healthCheckComponent": "ON_BOARDING",
     "healthCheckStatus": "UP",
     "version": "1.1.0",
     "description": "OK",
     "componentsInfo": [
       {
         "healthCheckComponent": "ZU",
         "healthCheckStatus": "UP",
         "version": "0.2.0",
         "description": "OK"
       },
       {
         "healthCheckComponent": "BE",
         "healthCheckStatus": "UP",
         "version": "1.1.0",
         "description": "OK"
       },
       {
         "healthCheckComponent": "CAS",
         "healthCheckStatus": "UP",
         "version": "2.1.17",
         "description": "OK"
       },
       {
         "healthCheckComponent": "FE",
         "healthCheckStatus": "UP",
         "version": "1.1.0",
         "description": "OK"
       }
     ]
   },
   {
     "healthCheckComponent": "FE",
     "healthCheckStatus": "UP",
     "version": "1.1.0",
     "description": "OK"
   }
 ]
}

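For output like the above, a small script can flag which components are failing, including nested ones. The helper below is my own sketch, not part of SDC; the sample JSON is trimmed from the response above.

```python
import json

# Sample trimmed from the SDC healthCheck response above.
health = json.loads("""
{
  "sdcVersion": "1.1.0",
  "componentsInfo": [
    {"healthCheckComponent": "BE", "healthCheckStatus": "UP",
     "description": "OK"},
    {"healthCheckComponent": "DE", "healthCheckStatus": "DOWN",
     "description": "U-EB cluster is not available"}
  ]
}
""")

def down_components(components):
    """Recursively collect components whose status is not UP."""
    down = []
    for comp in components:
        if comp.get("healthCheckStatus") != "UP":
            down.append((comp["healthCheckComponent"],
                         comp.get("description", "")))
        down.extend(down_components(comp.get("componentsInfo", [])))
    return down

print(down_components(health["componentsInfo"]))
# [('DE', 'U-EB cluster is not available')]
```

Run against the full response, this would single out DE with "U-EB cluster is not available", which matches the "Topic not found" symptom on the message bus.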



On some occasions the backend doesn't come up even though the pods are
running.

(This was seen on other nodes running ONAP and was there even without your
changes; the logs indicated nothing. But if I restart the SDC pods for
cassandra, elasticsearch and kibana before the backend restart, the backend
starts responding and ends up creating the user profile entries for the
various ONAP user roles, as seen in the logs. But this is unrelated to this
service distribution error, as the backend is up.)





Regards,

-Karthick

________________________________
From: Alexis de Talhouët 
<[email protected]>
Sent: Friday, January 19, 2018 4:54 AM
To: Ramanarayanan, Karthick
Cc: [email protected]
Subject: [**EXTERNAL**] Re: [onap-discuss] Service distribution error on latest 
ONAP/OOM

Hi,

Could you look at the logs of Policy for errors? For that you need to go
into the pods themselves, under /var/log/onap.
You could do the same for the SDC container (backend).
The thing that could have affected Policy is the fact that we removed the
persisted data of mariadb, because it was bogus
(https://gerrit.onap.org/r/#/c/27521/). But I doubt it explains your issue.
Besides that, nothing with a potentially disruptive effect has happened to
Policy.
The DCAE work was well tested before it got merged. I'll re-test sometime
today or early next week to make sure nothing has slipped through the cracks.

Thanks,
Alexis


On Jan 18, 2018, at 11:44 PM, Ramanarayanan, Karthick 
<[email protected]> wrote:

Hi,
 Trying to distribute a demo firewall service instance on a kubernetes host
running ONAP, I am seeing a new policy exception error with the latest OOM
on amsterdam.
(dcae deploy is false and disableDcae is true)

Error code: POL5000
Status code: 500
Internal Server Error. Please try again later.

All pods are up. The health check seems to be fine on all pods.
The k8s pod logs don't seem to reveal anything, and this happens
consistently whenever I try to distribute the service as an operator.

It was working fine last week.
Even yesterday I didn't get this error, though I got a different one related
to a createVnfInfra notify exception in the SO VNF create workflow step; that
was a different failure than this.

After the DCAE config changes got merged, this service distribution error
seems to have popped up. (DCAE is disabled for my setup.)

What am I missing?

Thanks,
-Karthick
_______________________________________________
onap-discuss mailing list
[email protected]
https://lists.onap.org/mailman/listinfo/onap-discuss

