Hi,
I just want to align on the versions we are each running.

First:
docker_version: 1.0-STAGING-latest
gerrit_branch: release-1.0.0

For release 1.0.0, were you successful in running the demo flow?

docker_version: 1.1-STAGING-latest
gerrit_branch: master

For the release candidate, were you successful in running the demo?

Regarding the error you saw in Kibana, please open a defect for us and we will
review it.

Kibana is additional functionality available in our application; it does not
impact the product functionality relevant to the demo flow.

Regarding the VID issue you are seeing: are you using the same version of VID
and SDC (1.1-STAGING-latest for both)?
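A quick way to verify that on the host is to compare the image tags the containers were started from. A minimal sketch; the container names below (sdc-FE, sdc-BE, vid-server) are assumptions, so check `docker ps` for the actual names on your setup:

```shell
# Compare image tags across the SDC and VID containers.
# Container names are assumptions; verify with `docker ps`.
for c in sdc-FE sdc-BE vid-server; do
  IMG=$(docker inspect --format '{{.Config.Image}}' "$c" 2>/dev/null) \
    || IMG="not running / docker unavailable"
  printf '%s -> %s\n' "$c" "$IMG"
done
```

If the tags differ (e.g. 1.0-STAGING-latest vs 1.1-STAGING-latest), the VID-to-SDC model fetch is a likely casualty.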

BR,

Michael Lando
Opensource & Frontend Team Lead, SDC
AT&T Network Application Development * NetCom
Tel Aviv | Tampa | Atlanta | New Jersey |Chicago
***************************************************************************
Office: +972 (3) 5451487
Mobile: +972 (54) 7833603
e-mail: ml6...@intl.att.com


From: onap-discuss-boun...@lists.onap.org On Behalf Of ajay.priyadar...@ril.com
Sent: Wednesday, July 05, 2017 9:52 AM
To: wei.d.c...@intel.com; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] [SDC] Need your help to deploy SDC

Hi Chen,

I have created two setups with
artifacts_version: 1.1.0-SNAPSHOT
docker_version: 1.1-STAGING-latest / 1.0-STAGING-latest
gerrit_branch: release-1.0.0

With docker version 1.1-STAGING-latest, I found the error you are getting.

Running handlers:
[2017-07-04T08:48:29+00:00] ERROR: Running exception handlers
Running handlers complete
[2017-07-04T08:48:29+00:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 00 seconds
[2017-07-04T08:48:29+00:00] FATAL: Stacktrace dumped to /root/chef-solo/cache/chef-stacktrace.out
[2017-07-04T08:48:29+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2017-07-04T08:48:29+00:00] ERROR: cookbook_file[/usr/share/elasticsearch/config/kibana_dashboard_virtualization.json] (sdc-elasticsearch::ES_6_create_kibana_dashboard_virtualization line 1) had an error: Chef::Exceptions::FileNotFound: Cookbook 'sdc-elasticsearch' (0.0.0) does not contain a file at any of these locations:
  files/debian-8.6/kibana_dashboard_virtualization.json
  files/debian/kibana_dashboard_virtualization.json
  files/default/kibana_dashboard_virtualization.json
  files/kibana_dashboard_virtualization.json

This cookbook _does_ contain: 
['files/default/dashboard_Monitoring-Dashboared.json','files/default/dashboard_BI-Dashboard.json','files/default/visualization_JVM-used-Threads-Num.json','files/default/logging.yml','files/default/visualization_Show-all-distributed-services.json','files/default/visualization_host-used-CPU.json','files/default/visualization_JVM-used-CPU.json','files/default/visualization_host-used-Threads-Num.json','files/default/visualization_number-of-user-accesses.json','files/default/visualization_Show-all-created-Resources-slash-Services-slash-Products.json','files/default/visualization_JVM-used-Memory.json','files/default/visualization_Show-all-certified-services-ampersand-resources-(per-day).json']
[2017-07-04T08:48:29+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef 
run process exited unsuccessfully (exit code 1)
[2017-07-04 09:38:50,215][INFO ][cluster.metadata         ] [a196cd3a4697] 
[auditingevents-2017-07] update_mapping [distributionengineevent]
[2017-07-05 05:25:46,814][DEBUG][action.index             ] [a196cd3a4697] 
[auditingevents-2017-07][0], node[hJtkhcP5S3yBoAL9_NqmAA], [P], v[4], 
s[STARTED], a[id=RZ9636CoT9qtFcMqRa0rvQ]: Failed to execute [index 
{[auditingevents-2017-07][externalapievent][AV0RNiK5ZsU_ZS6SiObx], 
source[{"MODIFIER":"","STATUS":"404","ACTION":"GetFilteredAssetList","TIMESTAMP":"2017-07-05
 05:25:46.809 
UTC","RESOURCE_URL":"\/sdc\/v1\/catalog\/services?distributionStatus=DISTRIBUTED","REQUEST_ID":"80c4e3e2-601e-431e-a710-dff9cd83325c","CONSUMER_ID":"VID","DESC":"SVC4642:
 No services were found to match criteria distributionStatus=DISTRIBUTED"}]}]
MapperParsingException[failed to parse [TIMESTAMP]]; nested: IllegalArgumentException[Invalid format: "2017-07-05 05:25:46.809 UTC" is malformed at " 05:25:46.809 UTC"];
        at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:339)
        at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:314)
        at org.elasticsearch.index.mapper.DocumentParser.parseAndMergeUpdate(DocumentParser.java:762)
        at org.elasticsearch.index.mapper.DocumentParser.parseDynamicValue(DocumentParser.java:676)
        at org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:447)
        at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:267)
        at org.elasticsearch.index.mapper.DocumentParser.innerParseDocument(DocumentParser.java:127)
        at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:79)
        at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:304)
        at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:517)
        at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:508)
        at org.elasticsearch.action.support.replication.TransportReplicationAction.prepareIndexOperationOnPrimary(TransportReplicationAction.java:1053)
        at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1061)
        at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Invalid format: "2017-07-05 05:25:46.809 UTC" is malformed at " 05:25:46.809 UTC"
        at org.joda.time.format.DateTimeParserBucket.doParseMillis(DateTimeParserBucket.java:187)
        at org.joda.time.format.DateTimeFormatter.parseMillis(DateTimeFormatter.java:780)
        at org.elasticsearch.index.mapper.core.DateFieldMapper$DateFieldType.parseStringValue(DateFieldMapper.java:360)
        at org.elasticsearch.index.mapper.core.DateFieldMapper.innerParseCreateField(DateFieldMapper.java:526)
        at org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:213)
        at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:331)
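The root cause in that trace is the TIMESTAMP value itself: the field's mapping uses Elasticsearch's default date format (strict ISO-8601 via strict_date_optional_time), and "2017-07-05 05:25:46.809 UTC" is neither ISO-8601 nor epoch millis, so Joda rejects it at the first space. A rough shape check of the same idea (a sketch, not a real Elasticsearch validation):

```shell
# ISO-8601 dates look like 2017-07-05T05:25:46.809Z; the audit event uses a
# space separator and a " UTC" suffix, which the default date mapping rejects.
TS="2017-07-05 05:25:46.809 UTC"
case "$TS" in
  [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]T*)
    RESULT="iso8601-like" ;;
  *)
    RESULT="malformed"
    echo "not ISO-8601: \"$TS\" fails at the first character after the date" ;;
esac
```

The fix belongs on the producer side (emit an ISO-8601 timestamp) or in the index mapping's date format; either way it is worth including in the defect report.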

But my health check is absolutely fine:

FE health-Check:
{
  "sdcVersion": "1.1.0-SNAPSHOT",
  "siteMode": "unknown",
  "componentsInfo": [
    {
      "healthCheckComponent": "BE",
      "healthCheckStatus": "UP",
      "version": "1.1.0-SNAPSHOT",
      "description": "OK"
    },
    {
      "healthCheckComponent": "ES",
      "healthCheckStatus": "UP",
      "description": "OK"
    },
    {
      "healthCheckComponent": "TITAN",
      "healthCheckStatus": "UP",
      "description": "OK"
    },
    {
      "healthCheckComponent": "DE",
      "healthCheckStatus": "UP",
      "description": "OK"
    },
    {
      "healthCheckComponent": "FE",
      "healthCheckStatus": "UP",
      "version": "1.1.0-SNAPSHOT",
      "description": "OK"
    }
  ]
}

check user existance: OK
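For anyone reproducing this, the per-component status can be pulled out of that health-check JSON with a simple filter. A sketch using an inlined two-component sample; on a live setup you would pipe the FE health-check endpoint's response (the URL and port are deployment-specific) into the same grep:

```shell
# Write a small sample of the health-check JSON and filter it down to
# component/status pairs. On a real deployment, replace the heredoc with
# a curl of your FE health-check URL.
HC_JSON=$(mktemp)
cat <<'EOF' > "$HC_JSON"
{
  "componentsInfo": [
    { "healthCheckComponent": "BE", "healthCheckStatus": "UP" },
    { "healthCheckComponent": "TITAN", "healthCheckStatus": "UP" }
  ]
}
EOF
grep -oE '"healthCheck(Component|Status)": "[A-Z]+"' "$HC_JSON"
```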

But my VID is unable to fetch models from SDC.
[inline screenshot of the VID error]

I had issues with AAI connectivity as well, so I changed docker_version to
1.0-STAGING-latest:

root@vm1-sdc:/opt# cat config/artifacts_version.txt config/docker_version.txt config/gerrit_branch.txt
1.1.0-SNAPSHOT
1.0-STAGING-latest
release-1.0.0

Now everything works fine, so you can reinstall SDC using the above configuration.

Regards,
Ajay

From: Chen, Wei D [mailto:wei.d.c...@intel.com]
Sent: 05 July 2017 11:45
To: Ajay Priyadarshi; onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] [SDC] Need your help to deploy SDC

Hi Ajay,

Thanks for your feedback!
I don't think that is the root cause on my side; I just needed to wait longer
for the backend to connect to Cassandra. But SDC still cannot work: the health
check scripts always show that the Titan graph is down. I have reinstalled ONAP
and changed the environment as below (as mentioned in another thread):
  *  artifacts_version: from 1.0.0 to 1.1.0-SNAPSHOT
  *  docker_version: from 1.0-STAGING-latest to latest

Still doesn't work.

Ajay, how did you get SDC working? Which artifact version and docker version
are you using?

sdc-es seems to fail as well,
$ sudo docker logs sdc-es
...
d: Cookbook 'sdc-elasticsearch' (0.0.0) does not contain a file at any of these 
locations:
  files/debian-8.6/kibana_dashboard_virtualization.json
  files/debian/kibana_dashboard_virtualization.json
  files/default/kibana_dashboard_virtualization.json
  files/kibana_dashboard_virtualization.json

This cookbook _does_ contain: 
['files/default/dashboard_Monitoring-Dashboared.json','files/default/logging.yml','files/default/visualization_JVM-used-CPU.json','files/default/visualization_JVM-used-Memory.json','files/default/visualization_JVM-used-Threads-Num.json','files/default/visualization_Show-all-certified-services-ampersand-resources-(per-day).json','files/default/visualization_Show-all-created-Resources-slash-Services-slash-Products.json','files/default/visualization_Show-all-distributed-services.json','files/default/visualization_host-used-CPU.json','files/default/visualization_host-used-Threads-Num.json','files/default/visualization_number-of-user-accesses.json','files/default/dashboard_BI-Dashboard.json']
[2017-07-04T21:38:40+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef 
run process exited unsuccessfully (exit code 1)
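The FileNotFound above is just Chef's cookbook_file lookup order running dry: it probes the platform-specific paths first and raises when none contains the file. A toy reproduction against a scratch cookbook directory (a hypothetical layout mirroring sdc-elasticsearch 0.0.0, which ships other files but not the Kibana dashboard JSON):

```shell
# Simulate cookbook_file's lookup order for a file the cookbook doesn't ship.
COOKBOOK=$(mktemp -d)
mkdir -p "$COOKBOOK/files/default"
touch "$COOKBOOK/files/default/logging.yml"   # present, like in the real cookbook
FOUND=""
for p in files/debian-8.6 files/debian files/default files; do
  if [ -f "$COOKBOOK/$p/kibana_dashboard_virtualization.json" ]; then
    FOUND="$p"
    break
  fi
done
if [ -z "$FOUND" ]; then
  echo "FileNotFound: kibana_dashboard_virtualization.json is not in the cookbook"
fi
```

Since the Kibana dashboards do not gate the demo flow, opening a defect and moving on is reasonable; dropping a copy of the JSON into files/default/ should also get past this step, though that is an assumption worth verifying before relying on it.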


From: ajay.priyadar...@ril.com
Sent: Wednesday, July 5, 2017 1:52 PM
To: Chen, Wei D <wei.d.c...@intel.com>; onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] [SDC] Need your help to deploy SDC


Hi Dave Chen,

I observed this issue while using openstack_float and found that Cassandra (CS)
and Elasticsearch (ES) were not able to connect, although this time all routing
was working fine.

Errors:

1. stderrout.log
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.64.105.82:9042 (com.datastax.driver.core.TransportException: [/10.64.105.82:9042] Cannot connect))

2. /data/logs/BE/ASDC/ASDC-BE/error.log

2017-06-23T06:12:11.828Z|||ES-Health-Check-Thread||ASDC-BE||ERROR|||192.168.3.1||o.o.sdc.be.dao.impl.ESCatalogDAO||ActivityType=<?>, Desc=<Error while trying to connect to elasticsearch. host: [10.64.105.82:9300] port: 9200>

Resolution:
The file /opt/config/public_ip.txt contains sdc_floating_ip (the public IP),
even though CS is up on -host-ip 0.0.0.0 -host-port 9042 and ES on -host-ip
0.0.0.0 -host-port 9200. I changed the /opt/config/public_ip.txt value to the
private IP.


You also have the same error:
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /192.168.4.116:9042 (com.datastax.driver.core.TransportException: [/192.168.4.116:9042] Cannot connect))

My Cassandra was also showing all the required keyspaces but was unable to
serve requests, so I observed and resolved it as above. Hopefully this helps
you.

Regards,
Ajay Priyadarshi





Date: Sat, 1 Jul 2017 07:59:45 +0000
From: "Chen, Wei D" <wei.d.c...@intel.com>
To: <onap-discuss@lists.onap.org>
Subject: [onap-discuss] [SDC] Need your help to deploy SDC
Message-ID: <c5a0092c63e939488005f15f736a81125b2c2...@shsmsx104.ccr.corp.intel.com>
Content-Type: text/plain; charset="us-ascii"



Hi All,



I am trying to enable SDC in the demo project; ONAP is on top of vanilla
OpenStack, and I just want to try how it works from the portal. But I ran into
a bunch of issues, and the latest one is from Cassandra: from the log it seems
port 9042 is not available (log attached). So I went into the container and ran
/root/startup.sh manually. At first I could access the DB, but some minutes
later I could not access it anymore.

This is what I get from the DB a couple of minutes after running startup.sh:

# cqlsh -u cassandra -p Aa1234%^! 10.0.3.1

Keyspaces:

cassandra@cqlsh> DESCRIBE keyspaces;

system_auth   titan   sdcaudit  sdcartifact
sdccomponent  system  dox       system_traces

I checked both 9160 and 9042 inside the container; they seem to be open:

root@5aad37f3ec48:/var/log/cassandra# netstat -anp | grep 9042
tcp6       0      0 0.0.0.0:9042            :::*                    LISTEN      -
root@5aad37f3ec48:/var/log/cassandra# netstat -anp | grep 9160
tcp        0      0 0.0.0.0:9160            0.0.0.0:*               LISTEN      -





But when I try to connect to the DB again, I see:

root@5aad37f3ec48:/var/log/cassandra# cqlsh -u cassandra -p Aa1234%^! 10.0.3.1
Connection error: ('Unable to connect to any servers', {'10.0.3.1': OperationTimedOut('errors=None, last_host=None',)})
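When netstat shows the port listening but cqlsh times out, it helps to separate "TCP port open" from "Cassandra actually serving CQL": the native-transport port can accept connections before the server is ready to authenticate, as the startup script's own "cqlsh is NOT enabled to connect yet" message suggests. A portable probe (host and port are placeholders for your container's address):

```shell
# Probe the CQL native port with a plain TCP connect. Success here plus a
# cqlsh timeout usually means Cassandra is still bootstrapping or the
# create-user loop has not finished yet.
HOST="${CASSANDRA_HOST:-10.0.3.1}"
PORT=9042
if timeout 5 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
  RESULT="TCP connect to $HOST:$PORT OK"
else
  RESULT="TCP connect to $HOST:$PORT failed or timed out"
fi
echo "$RESULT"
```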



Also, I saw this message on the console:

Fri Jun 30 01:39:52 UTC 2017 --- cqlsh is NOT enabled to connect yet. sleep 5

Looks like this message comes from /tmp/create_cassandra_user.sh.



Any ideas on how to fix it?



Best Regards,

Dave Chen


"Confidentiality Warning: This message and any attachments are intended only 
for the use of the intended recipient(s), are confidential and may be 
privileged. If you are not the intended recipient, you are hereby notified that 
any review, re-transmission, conversion to hard copy, copying, circulation or 
other use of this message and any attachments is strictly prohibited. If you 
are not the intended recipient, please notify the sender immediately by return 
email and delete this message and any attachments from your system.

Virus Warning: Although the company has taken reasonable precautions to ensure 
no viruses are present in this email. The company cannot accept responsibility 
for any loss or damage arising from the use of this email or attachment."

_______________________________________________
onap-discuss mailing list
onap-discuss@lists.onap.org
https://lists.onap.org/mailman/listinfo/onap-discuss
