Dave,
the logs look familiar to me... if only I could remember what I did to 
mitigate it.
I can only help from vague memory, as I seem not to have documented this 
piece properly :-(

I think the root issue is this line:
Caused by: java.net.UnknownHostException: 
portal.api.simpledemo.openecomp.org from 1610-1

which is caused by a strange /etc/resolv.conf in the 1610-1 docker 
container. There are some rules for how this file is created; in summary, it 
helped us to remove the name servers on private networks (192.168...) and 
just use 8.8.8.8 (and of course 10.0.100.1).

You could do a few things (I attached the output from my portal, which is 
working):

root@vm1-portal:~# docker exec -it 1610-1 ping portal.api.simpledemo.openecomp.org
# this should help you decide whether host name resolution works inside the container
#
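# If the name does not resolve, one crude interim workaround (just a sketch,
# not the proper fix) is to pin the name inside the container -- replace the
# placeholder IP with whatever portal.api.simpledemo.openecomp.org should
# resolve to in your deployment:
root@vm1-portal:~# docker exec 1610-1 bash -c 'echo "<portal-vm-ip> portal.api.simpledemo.openecomp.org" >> /etc/hosts'
# note this only survives until the container restarts; the DNS settings
# shown below are the real fix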
-------------
root@vm1-portal:~# docker exec -it 1610-1 cat /etc/resolv.conf
search openstacklocal
nameserver 10.0.100.1
nameserver 8.8.8.8



-------------
root@vm1-portal:~# ps www -C dockerd
  PID TTY      STAT   TIME COMMAND
17827 ?        Ssl   37:43 /usr/bin/dockerd --dns 10.0.100.1 --dns 8.8.8.8 --mtu=1450 --raw-logs
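# If you want those --dns flags to be persistent, the equivalent can also go
# into the Docker daemon config (a minimal sketch, assuming a stock Docker
# install; drop the duplicate flags from the dockerd command line first,
# otherwise the daemon refuses to start, then restart the docker service):
root@vm1-portal:~# cat /etc/docker/daemon.json
{
  "dns": ["10.0.100.1", "8.8.8.8"],
  "mtu": 1450
}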



-------------
root@vm1-portal:~# cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.0.100.1
nameserver 8.8.8.8
search openstacklocal



Mit freundlichen Grüßen / Kind regards 
Josef Reisinger 



From:   "Chen, Wei D" <wei.d.c...@intel.com>
To:     "ajay.priyadar...@ril.com" <ajay.priyadar...@ril.com>, 
"onap-discuss@lists.onap.org" <onap-discuss@lists.onap.org>
Date:   06.07.2017 11:07
Subject:        Re: [onap-discuss] [SDC] [portal] [demo] Need your help to 
deploy SDC
Sent by:        onap-discuss-boun...@lists.onap.org



Thank you Ajay!
 
I haven't customized anything; I am using the yaml from the release-1.0.0 
branch. ONAP is set up on top of the stable/ocata branch.
After restarting each container a couple of times, the health check finally 
shows that all of them are up.
 
ubuntu@vm1-sdc:/data/scripts$ curl http://localhost:8181/sdc1/rest/healthCheck
{
  "sdcVersion": "1.0.0",
  "siteMode": "unknown",
  "componentsInfo": [
    {
      "healthCheckComponent": "BE",
      "healthCheckStatus": "UP",
      "version": "1.0.0",
      "description": "OK"
    },
    {
      "healthCheckComponent": "ES",
      "healthCheckStatus": "UP",
      "description": "OK"
    },
    {
      "healthCheckComponent": "TITAN",
      "healthCheckStatus": "UP",
      "description": "OK"
    },
    {
      "healthCheckComponent": "DE",
      "healthCheckStatus": "UP",
      "description": "OK"
    },
    {
      "healthCheckComponent": "FE",
      "healthCheckStatus": "UP",
      "version": "1.0.0",
      "description": "OK"
    }
  ]
}
 
 
But I cannot log in to the portal this time. :-(
$ sudo docker logs 1610-1
...
Caused by: com.mchange.v2.resourcepool.CannotAcquireResourceException: A 
ResourcePool could not acquire a resource from its primary factory or 
source.
        at 
com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1469)
        at 
com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:644)
        at 
com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:554)
        at 
com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutAndMarkConnectionInUse(C3P0PooledConnectionPool.java:758)
        at 
com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:685)
        ... 44 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: 
Communications link failure
 
The last packet sent successfully to the server was 0 milliseconds ago. 
The driver has not received any packets from the server.
        at sun.reflect.GeneratedConstructorAccessor116.newInstance(Unknown 
Source)
        at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
        at 
com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1117)
        at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:350)
        at 
com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2408)
        at 
com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2445)
        at 
com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2230)
        at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:813)
        at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
        at sun.reflect.GeneratedConstructorAccessor113.newInstance(Unknown 
Source)
        at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
        at 
com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:399)
        at 
com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:334)
        at 
com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:175)
        at 
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:220)
        at 
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:206)
        at 
com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:203)
        at 
com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1138)
        at 
com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1125)
        at 
com.mchange.v2.resourcepool.BasicResourcePool.access$700(BasicResourcePool.java:44)
        at 
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1870)
        at 
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:696)
Caused by: java.net.UnknownHostException: 
portal.api.simpledemo.openecomp.org
        at java.net.InetAddress.getAllByName0(InetAddress.java:1280)
        at java.net.InetAddress.getAllByName(InetAddress.java:1192)
        at java.net.InetAddress.getAllByName(InetAddress.java:1126)
        at 
com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:249)
        at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:300)
        ... 20 more
 
 
Status of the containers from portal,
$ sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                     NAMES
bf58b281a5d2        ep:1610-1           "/PROJECT/OpenSour..."   43 minutes ago      Up 42 minutes       0.0.0.0:8006->8005/tcp, 0.0.0.0:8010->8009/tcp, 0.0.0.0:8989->8080/tcp    1610-1
ab09661949f7        ecompdb:portal      "docker-entrypoint..."   43 minutes ago      Up 43 minutes                                                                                 ecompdb_portal
3e24dd0af882        mariadb             "docker-entrypoint..."   43 minutes ago      Created                                                                                       data_vol_portal
 
Any ideas?
 
Regards,
Dave Chen
 
From: ajay.priyadar...@ril.com [mailto:ajay.priyadar...@ril.com] 
Sent: Thursday, July 6, 2017 3:14 PM
To: Chen, Wei D <wei.d.c...@intel.com>; onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] [SDC] [demo] Need your help to deploy SDC
 
Hi Chen,
 
You are using a customized version. As of the new design, A&AI has now been 
split into two VMs (one has the docker containers that run the A&AI logic, 
and one has the databases and third-party software dependencies). Your yaml 
doesn't support it. Your yaml is also doing the installation job itself (the 
install script is pasted into the yaml).
Have you customized it for your environment? If not, you should use the 
modular approach for installation: Heat copies all the required configuration 
from the underlying OpenStack and downloads the install scripts for the 
respective VMs. This also makes it easier to fix the issues we observed in 
the install scripts (e.g. Ubuntu 16.04 ens3 renaming etc.).
 
You can always find latest ones from 
https://gerrit.onap.org/r/gitweb?p=demo.git;a=tree;f=heat/OpenECOMP;h=b191dadd58edbe261c860adb649acdfd8e7c8b0f;hb=HEAD
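If it is easier, you can also just clone the repo and take the templates from 
there (assuming anonymous HTTP access to gerrit works in your network):
git clone https://gerrit.onap.org/r/demo.git
ls demo/heat/OpenECOMP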
Please let me know your openstack/cloud environment details.
 
For my environment, I needed to make changes for DCAE (floating IPs), so I 
changed those accordingly. I am not sure whether that part is working or not, 
but I am able to create a service using the portal.
My environment: RHOSP 10 (OpenStack Newton)
 
 
Regards,
Ajay
 
From: Chen, Wei D [mailto:wei.d.c...@intel.com] 
Sent: 06 July 2017 12:11
To: Ajay Priyadarshi; onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] [SDC] [demo] Need your help to deploy SDC
 
Hi Ajay,
 
This copy is from release 1.0.0; it hasn't created those files. I saw the 
files get created with the yaml from the latest code branch.
 
 
From: ajay.priyadar...@ril.com [mailto:ajay.priyadar...@ril.com] 
Sent: Thursday, July 6, 2017 2:29 PM
To: Chen, Wei D <wei.d.c...@intel.com>; onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] [SDC] [demo] Need your help to deploy SDC
 
Can you share your yaml file (the one you used for stack creation)? The yaml 
creates these files (/opt/config/artifacts_version.txt and 
/opt/config/gerrit_branch.txt) in your VM.
 
 
Regards,
Ajay 
 
From: Chen, Wei D [mailto:wei.d.c...@intel.com] 
Sent: 06 July 2017 11:54
To: Ajay Priyadarshi; onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] [SDC] [demo] Need your help to deploy SDC
 
Hi Ajay, 
 
Still no luck getting it up. I reverted the environment settings as you 
mentioned below, and the backend logs say it cannot connect to Cassandra,
 
...
Caused by:
com.datastax.driver.core.exceptions.AuthenticationException: 
Authentication error on host /192.168.4.107:9042: Username and/or password 
are incorrect
        at 
com.datastax.driver.core.Connection$8.apply(Connection.java:378)
        at 
com.datastax.driver.core.Connection$8.apply(Connection.java:348)
        at 
com.google.common.util.concurrent.Futures$AsyncChainingFuture.doTransform(Futures.java:1442)
        at 
com.google.common.util.concurrent.Futures$AsyncChainingFuture.doTransform(Futures.java:1433)
        at 
com.google.common.util.concurrent.Futures$AbstractChainingFuture.run(Futures.java:1408)
        at 
com.google.common.util.concurrent.Futures$2$1.run(Futures.java:1177)
        at 
com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
        at 
com.google.common.util.concurrent.Futures$2.execute(Futures.java:1174)
        at 
com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:817)
        at 
com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:753)
        at 
com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:613)
        at 
com.datastax.driver.core.Connection$Future.onSet(Connection.java:1179)
 
 
So I logged in to the container and tried it manually; I cannot connect to 
the DB either,
# cqlsh -u cassandra -p cassandra 10.0.3.1
Connection error: ('Unable to connect to any servers', {'10.0.3.1': 
AuthenticationFailed(u'Failed to authenticate to 10.0.3.1: code=0100 [Bad 
credentials] message="Username and/or password are incorrect"',)})
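I also want to double-check which Cassandra credentials the chef run actually 
configured for the backend; something like the grep below might show it (the 
path is a guess, based on the /root/chef-solo repository mentioned in the 
output further down):
# grep -ri -E 'cassandra.*(user|pass)' /root/chef-solo/ 2>/dev/null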
 
I guess some chef script hasn't been executed correctly, so I am trying to 
run startup.sh inside the container directly to see if the issue goes away.
root@ad0d853bad30:/# ./root/startup.sh
 
[2017-07-05T21:49:54+00:00] INFO: Started chef-zero at 
chefzero://localhost:8889 with repository at /root/chef-solo
  One version per cookbook
 
[2017-07-05T21:49:54+00:00] INFO: Forking chef instance to converge...
Starting Chef Client, version 12.19.36
[2017-07-05T21:49:56+00:00] INFO: *** Chef 12.19.36 ***
[2017-07-05T21:49:56+00:00] INFO: Platform: x86_64-linux
[2017-07-05T21:49:56+00:00] INFO: Chef-client pid: 29039
[2017-07-05T21:50:56+00:00] INFO: Setting the run_list to 
["role[cassandra-actions]"] from CLI options
[2017-07-05T21:50:57+00:00] WARN: Run List override has been provided.
[2017-07-05T21:50:57+00:00] WARN: Original Run List: 
[role[cassandra-actions]]
[2017-07-05T21:50:57+00:00] WARN: Overridden Run List: 
[recipe[cassandra-actions::01-configureCassandra]]
[2017-07-05T21:50:57+00:00] INFO: Run List is 
[recipe[cassandra-actions::01-configureCassandra]]
[2017-07-05T21:50:57+00:00] INFO: Run List expands to 
[cassandra-actions::01-configureCassandra]
[2017-07-05T21:50:57+00:00] INFO: Starting Chef Run for ad0d853bad30
[2017-07-05T21:50:57+00:00] INFO: Running start handlers
[2017-07-05T21:50:57+00:00] INFO: Start handlers complete.
[2017-07-05T21:50:57+00:00] INFO: HTTP Request Returned 404 Not Found: 
Object not found:
resolving cookbooks for run list: 
["cassandra-actions::01-configureCassandra"]
[2017-07-05T21:53:38+00:00] INFO: Loading cookbooks 
[cassandra-actions@0.0.0]
[2017-07-05T21:53:38+00:00] INFO: Skipping removal of obsoleted cookbooks 
from the cache
Synchronizing Cookbooks:
...
 
There is a 404 message; I am not sure if that matters. The process is quite 
slow, let me see what happens.
 
 
BTW, I cannot find the files /opt/config/artifacts_version.txt and 
/opt/config/gerrit_branch.txt inside the SDC VM; those files exist when I am 
using the latest code but not the 1.0.0 release. Have you ever managed to 
work around this?
 
Regards,
Dave Chen
 
From: ajay.priyadar...@ril.com [mailto:ajay.priyadar...@ril.com] 
Sent: Wednesday, July 5, 2017 2:52 PM
To: Chen, Wei D <wei.d.c...@intel.com>; onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] [SDC] Need your help to deploy SDC
 
Hi Chen,
 
I have created two setups with 
artifacts_version: 1.1.0-SNAPSHOT
docker_version: 1.1-STAGING-latest / 1.0-STAGING-latest
gerrit_branch: release-1.0.0
 
With docker version 1.1-STAGING-latest, I found the error you are getting.
 
Running handlers:
[2017-07-04T08:48:29+00:00] ERROR: Running exception handlers
Running handlers complete
[2017-07-04T08:48:29+00:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 00 seconds
[2017-07-04T08:48:29+00:00] FATAL: Stacktrace dumped to 
/root/chef-solo/cache/chef-stacktrace.out
[2017-07-04T08:48:29+00:00] FATAL: Please provide the contents of the 
stacktrace.out file if you file a bug report
[2017-07-04T08:48:29+00:00] ERROR: 
cookbook_file[/usr/share/elasticsearch/config/kibana_dashboard_virtualization.json]
 
(sdc-elasticsearch::ES_6_create_kibana_dashboard_virtualization line 1) 
had an error: Chef::Exceptions::FileNotFound: Cookbook 'sdc-elasticsearch' 
(0.0.0) does not contain a file at any of these locations:
  files/debian-8.6/kibana_dashboard_virtualization.json
  files/debian/kibana_dashboard_virtualization.json
  files/default/kibana_dashboard_virtualization.json
  files/kibana_dashboard_virtualization.json
 
This cookbook _does_ contain: 
['files/default/dashboard_Monitoring-Dashboared.json','files/default/dashboard_BI-Dashboard.json','files/default/visualization_JVM-used-Threads-Num.json','files/default/logging.yml','files/default/visualization_Show-all-distributed-services.json','files/default/visualization_host-used-CPU.json','files/default/visualization_JVM-used-CPU.json','files/default/visualization_host-used-Threads-Num.json','files/default/visualization_number-of-user-accesses.json','files/default/visualization_Show-all-created-Resources-slash-Services-slash-Products.json','files/default/visualization_JVM-used-Memory.json','files/default/visualization_Show-all-certified-services-ampersand-resources-(per-day).json']
[2017-07-04T08:48:29+00:00] FATAL: Chef::Exceptions::ChildConvergeError: 
Chef run process exited unsuccessfully (exit code 1)
[2017-07-04 09:38:50,215][INFO ][cluster.metadata         ] [a196cd3a4697] 
[auditingevents-2017-07] update_mapping [distributionengineevent]
[2017-07-05 05:25:46,814][DEBUG][action.index             ] [a196cd3a4697] 
[auditingevents-2017-07][0], node[hJtkhcP5S3yBoAL9_NqmAA], [P], v[4], 
s[STARTED], a[id=RZ9636CoT9qtFcMqRa0rvQ]: Failed to execute [index 
{[auditingevents-2017-07][externalapievent][AV0RNiK5ZsU_ZS6SiObx], 
source[{"MODIFIER":"","STATUS":"404","ACTION":"GetFilteredAssetList","TIMESTAMP":"2017-07-05
 
05:25:46.809 
UTC","RESOURCE_URL":"\/sdc\/v1\/catalog\/services?distributionStatus=DISTRIBUTED","REQUEST_ID":"80c4e3e2-601e-431e-a710-dff9cd83325c","CONSUMER_ID":"VID","DESC":"SVC4642:
 
No services were found to match criteria 
distributionStatus=DISTRIBUTED"}]}]
MapperParsingException[failed to parse [TIMESTAMP]]; nested: 
IllegalArgumentException[Invalid format: "2017-07-05 05:25:46.809 UTC" is 
malformed at " 05:25:46.809 UTC"];
        at 
org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:339)
        at 
org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:314)
        at 
org.elasticsearch.index.mapper.DocumentParser.parseAndMergeUpdate(DocumentParser.java:762)
        at 
org.elasticsearch.index.mapper.DocumentParser.parseDynamicValue(DocumentParser.java:676)
        at 
org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:447)
        at 
org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:267)
        at 
org.elasticsearch.index.mapper.DocumentParser.innerParseDocument(DocumentParser.java:127)
        at 
org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:79)
        at 
org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:304)
        at 
org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:517)
        at 
org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:508)
        at 
org.elasticsearch.action.support.replication.TransportReplicationAction.prepareIndexOperationOnPrimary(TransportReplicationAction.java:1053)
        at 
org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1061)
        at 
org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
        at 
org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
        at 
org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
        at 
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Invalid format: "2017-07-05 
05:25:46.809 UTC" is malformed at " 05:25:46.809 UTC"
        at 
org.joda.time.format.DateTimeParserBucket.doParseMillis(DateTimeParserBucket.java:187)
        at 
org.joda.time.format.DateTimeFormatter.parseMillis(DateTimeFormatter.java:780)
        at 
org.elasticsearch.index.mapper.core.DateFieldMapper$DateFieldType.parseStringValue(DateFieldMapper.java:360)
        at 
org.elasticsearch.index.mapper.core.DateFieldMapper.innerParseCreateField(DateFieldMapper.java:526)
        at 
org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:213)
        at 
org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:331)
 
But my health check is absolutely fine:
 
FE health-Check:
{
  "sdcVersion": "1.1.0-SNAPSHOT",
  "siteMode": "unknown",
  "componentsInfo": [
    {
      "healthCheckComponent": "BE",
      "healthCheckStatus": "UP",
      "version": "1.1.0-SNAPSHOT",
      "description": "OK"
    },
    {
      "healthCheckComponent": "ES",
      "healthCheckStatus": "UP",
      "description": "OK"
    },
    {
      "healthCheckComponent": "TITAN",
      "healthCheckStatus": "UP",
      "description": "OK"
    },
    {
      "healthCheckComponent": "DE",
      "healthCheckStatus": "UP",
      "description": "OK"
    },
    {
      "healthCheckComponent": "FE",
      "healthCheckStatus": "UP",
      "version": "1.1.0-SNAPSHOT",
      "description": "OK"
    }
  ]
}
 
check user existance: OK
 
But my VID is unable to fetch models from SDC.

 
I have issues with A&AI connectivity as well, so I changed docker_version to 
1.0-STAGING-latest.
root@vm1-sdc:/opt# cat config/artifacts_version.txt config/docker_version.txt config/gerrit_branch.txt
1.1.0-SNAPSHOT
1.0-STAGING-latest
release-1.0.0
 
Now everything is working fine, so you can reinstall SDC using the above 
configuration.
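For reference, applying that combination on an existing SDC VM would be 
something like the following (assuming the standard /opt/config layout shown 
above), before re-running the SDC install script:
echo "1.1.0-SNAPSHOT"     > /opt/config/artifacts_version.txt
echo "1.0-STAGING-latest" > /opt/config/docker_version.txt
echo "release-1.0.0"      > /opt/config/gerrit_branch.txt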
 
Regards,
Ajay 
 
From: Chen, Wei D [mailto:wei.d.c...@intel.com] 
Sent: 05 July 2017 11:45
To: Ajay Priyadarshi; onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] [SDC] Need your help to deploy SDC
 
Hi Ajay,
 
Thanks for your feedback!
I think that is not the root cause on my side; I just needed to wait longer 
so that the backend could connect to Cassandra. But SDC still cannot work: 
the health check scripts always show me that the Titan graph is down. I have 
reinstalled ONAP and changed the environment as below (as mentioned in 
another thread):
  *     artifacts_version:   from 1.0.0                 to 1.1.0-SNAPSHOT
  *     docker_version:      from 1.0-STAGING-latest    to latest
 
Still doesn't work. 
 
Ajay, how did you ever enable SDC? What artifact version and docker version 
are you using?
 
sdc-es seems to fail as well, 
$ sudo docker logs sdc-es
...
d: Cookbook 'sdc-elasticsearch' (0.0.0) does not contain a file at any of 
these locations:
  files/debian-8.6/kibana_dashboard_virtualization.json
  files/debian/kibana_dashboard_virtualization.json
  files/default/kibana_dashboard_virtualization.json
  files/kibana_dashboard_virtualization.json
 
This cookbook _does_ contain: 
['files/default/dashboard_Monitoring-Dashboared.json','files/default/logging.yml','files/default/visualization_JVM-used-CPU.json','files/default/visualization_JVM-used-Memory.json','files/default/visualization_JVM-used-Threads-Num.json','files/default/visualization_Show-all-certified-services-ampersand-resources-(per-day).json','files/default/visualization_Show-all-created-Resources-slash-Services-slash-Products.json','files/default/visualization_Show-all-distributed-services.json','files/default/visualization_host-used-CPU.json','files/default/visualization_host-used-Threads-Num.json','files/default/visualization_number-of-user-accesses.json','files/default/dashboard_BI-Dashboard.json']
[2017-07-04T21:38:40+00:00] FATAL: Chef::Exceptions::ChildConvergeError: 
Chef run process exited unsuccessfully (exit code 1)
 
 
From: ajay.priyadar...@ril.com [mailto:ajay.priyadar...@ril.com] 
Sent: Wednesday, July 5, 2017 1:52 PM
To: Chen, Wei D <wei.d.c...@intel.com>; onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] [SDC] Need your help to deploy SDC
 
Hi Dave Chen,
 
I observed this issue when using openstack_float, and found that CS and ES 
were not able to connect, even though this time all routing is working fine.
                Errors:
1.      stderrout.log
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
tried for query failed (tried: /10.64.105.82:9042 
(com.datastax.driver.core.TransportException: [/10.64.105.82:9042] Cannot 
connect))
2.      /data/logs/BE/ASDC/ASDC-BE/error.log 
2017-06-23T06:12:11.828Z|||ES-Health-Check-Thread||ASDC-BE||ERROR|||192.168.3.1||o.o.sdc.be.dao.impl.ESCatalogDAO||ActivityType=<?>,
 
Desc=<Error while trying to connect to elasticsearch. host: 
[10.64.105.82:9300] port: 9200>
 
Resolution:
The file /opt/config/public_ip.txt uses sdc_floating_ip (the public IP), even 
though CS is up with -host-ip 0.0.0.0 -host-port 9042 and ES with -host-ip 
0.0.0.0 -host-port 9200. I changed the /opt/config/public_ip.txt value to the 
private IP.
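Concretely, the change was something like the following inside the SDC VM 
(placeholder address; use your instance's private IP), and you may need to 
restart the SDC containers afterwards for it to take effect:
echo "<sdc-vm-private-ip>" > /opt/config/public_ip.txt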
 
You also have the same error:
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: 
All host(s) tried for query failed (tried: /192.168.4.116:9042 
(com.datastax.driver.core.TransportException: [/192.168.4.116:9042] Cannot 
connect))
 
My Cassandra was also showing all the required keyspaces, but was unable to 
serve. So I observed and resolved it as above. Hopefully it helps you.
 
Regards,
Ajay Priyadarshi
 
 
Date: Sat, 1 Jul 2017 07:59:45 +0000
From: "Chen, Wei D" <wei.d.c...@intel.com>
To: "onap-discuss@lists.onap.org" <onap-discuss@lists.onap.org>
Subject: [onap-discuss]  [SDC] Need your help to deploy SDC
Message-ID:
                <
c5a0092c63e939488005f15f736a81125b2c2...@shsmsx104.ccr.corp.intel.com>
 
Content-Type: text/plain; charset="us-ascii"
 
Hi All,
 
I am trying to enable SDC in the demo project; ONAP is on top of vanilla 
OpenStack, and I just want to try out how it works from the portal. But I ran 
into a bunch of issues, and the latest one is from Cassandra: from the log it 
seems like port 9042 is not available (log is attached). So I went to the 
container and ran /root/startup.sh manually; at first I can access the DB, 
but some minutes later I cannot access the DB anymore.
 
This is what I get from the DB a couple of minutes after running startup.sh.
 
# cqlsh -u cassandra -p Aa1234%^! 10.0.3.1
 
 
 
Namespaces:
 
cassandra@cqlsh> DESCRIBE keyspaces;
 
 
 
system_auth   titan   sdcaudit  sdcartifact
 
sdccomponent  system  dox       system_traces
 
 
I checked both 9160 and 9042 inside the container, and it seems like they are open,
root@5aad37f3ec48:/var/log/cassandra# netstat -anp | grep 9042
tcp6       0      0 0.0.0.0:9042            :::*                    LISTEN      -
root@5aad37f3ec48:/var/log/cassandra# netstat -anp | grep 9160
tcp        0      0 0.0.0.0:9160            0.0.0.0:*               LISTEN      -
 
 
But when I try to connect to the DB again, I see,
root@5aad37f3ec48:/var/log/cassandra# cqlsh -u cassandra -p Aa1234%^! 10.0.3.1
Connection error: ('Unable to connect to any servers', {'10.0.3.1': 
OperationTimedOut('errors=None, last_host=None',)})
 
Also, I saw these messages on the console,
Fri Jun 30 01:39:52 UTC 2017 --- cqlsh is NOT enabled to connect yet. 
sleep 5
 
Looks like that info comes from /tmp/create_cassandra_user.sh.
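I plan to keep watching whether the native transport ever comes up, with 
something like this inside the container (the log path is a guess for this 
image):
root@5aad37f3ec48:/# nodetool statusbinary
root@5aad37f3ec48:/# grep -i "Starting listening for CQL clients" /var/log/cassandra/system.log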
 
Any ideas on how to fix it?
 
Best Regards,
Dave Chen
 

"Confidentiality Warning: This message and any attachments are intended 
only for the use of the intended recipient(s), are confidential and may be 
privileged. If you are not the intended recipient, you are hereby notified 
that any review, re-transmission, conversion to hard copy, copying, 
circulation or other use of this message and any attachments is strictly 
prohibited. If you are not the intended recipient, please notify the 
sender immediately by return email and delete this message and any 
attachments from your system.
Virus Warning: Although the company has taken reasonable precautions to 
ensure no viruses are present in this email. The company cannot accept 
responsibility for any loss or damage arising from the use of this email 
or attachment."

"Confidentiality Warning: This message and any attachments are intended 
only for the use of the intended recipient(s), are confidential and may be 
privileged. If you are not the intended recipient, you are hereby notified 
that any review, re-transmission, conversion to hard copy, copying, 
circulation or other use of this message and any attachments is strictly 
prohibited. If you are not the intended recipient, please notify the 
sender immediately by return email and delete this message and any 
attachments from your system.
Virus Warning: Although the company has taken reasonable precautions to 
ensure no viruses are present in this email. The company cannot accept 
responsibility for any loss or damage arising from the use of this email 
or attachment."

"Confidentiality Warning: This message and any attachments are intended 
only for the use of the intended recipient(s), are confidential and may be 
privileged. If you are not the intended recipient, you are hereby notified 
that any review, re-transmission, conversion to hard copy, copying, 
circulation or other use of this message and any attachments is strictly 
prohibited. If you are not the intended recipient, please notify the 
sender immediately by return email and delete this message and any 
attachments from your system.
Virus Warning: Although the company has taken reasonable precautions to 
ensure no viruses are present in this email. The company cannot accept 
responsibility for any loss or damage arising from the use of this email 
or attachment."

"Confidentiality Warning: This message and any attachments are intended 
only for the use of the intended recipient(s), are confidential and may be 
privileged. If you are not the intended recipient, you are hereby notified 
that any review, re-transmission, conversion to hard copy, copying, 
circulation or other use of this message and any attachments is strictly 
prohibited. If you are not the intended recipient, please notify the 
sender immediately by return email and delete this message and any 
attachments from your system.
Virus Warning: Although the company has taken reasonable precautions to 
ensure no viruses are present in this email. The company cannot accept 
responsibility for any loss or damage arising from the use of this email 
or attachment."_______________________________________________
onap-discuss mailing list
onap-discuss@lists.onap.org
https://lists.onap.org/mailman/listinfo/onap-discuss




_______________________________________________
onap-discuss mailing list
onap-discuss@lists.onap.org
https://lists.onap.org/mailman/listinfo/onap-discuss
