Can you paste the relations section of the juju status output that shows those
relations in place? There may be an issue with the charm logic; breaking and
re-adding the relations might kick it into gear.
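For the three relations ceilometer complains about, that would look roughly like the following. This is a sketch based on the endpoint names in your status output (amqp/rabbitmq-server, identity-service/keystone, shared-db/mongodb); on Juju 1.25 `destroy-relation` and `remove-relation` should behave the same, and it's worth re-checking `juju status` after each pair:

```shell
# messaging
juju destroy-relation ceilometer:amqp rabbitmq-server
juju add-relation ceilometer:amqp rabbitmq-server

# identity
juju destroy-relation ceilometer:identity-service keystone
juju add-relation ceilometer:identity-service keystone

# database
juju destroy-relation ceilometer:shared-db mongodb
juju add-relation ceilometer:shared-db mongodb
```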



On Wed, Jan 4, 2017 at 2:48 AM Mac Lin <mkl0...@gmail.com> wrote:

> Really appreciate the help. I've been stuck here for weeks.
>
> AFAIK, no. I've attached two results of "juju status", from CloudLab (good)
> and my server (bad). It's supposed to work without exposing the ports. I
> also just noticed that on my server most of the services are in blocked
> status, even though the supposedly missing pieces are present. For example,
> ceilometer complains about missing messaging, identity, and database
> relations, but those relations do exist.
>
> Please let me know if any more info is needed.
>
>   ceilometer:
>     charm: cs:trusty/ceilometer-17
>     exposed: false
>     service-status:
>       current: blocked
>       message: 'Missing relations: messaging, identity, database'
>       since: 03 Jan 2017 19:39:55Z
>     relations:
>       amqp:
>       - rabbitmq-server
>       ceilometer-service:
>       - ceilometer-agent
>       cluster:
>       - ceilometer
>       identity-service:
>       - keystone
>       juju-info:
>       - nagios
>       nrpe-external-master:
>       - nrpe
>       shared-db:
>       - mongodb
>     units:
>       ceilometer/0:
>         workload-status:
>           current: blocked
>           message: 'Missing relations: messaging, identity, database'
>           since: 03 Jan 2017 19:39:55Z
>         agent-status:
>           current: executing
>           message: running install hook
>           since: 03 Jan 2017 14:53:31Z
>           version: 1.25.9
>         agent-state: started
>         agent-version: 1.25.9
>         machine: "9"
>         open-ports:
>         - 8777/tcp
>         public-address: ceilometer.cord.lab
>
>
>
> On Tue, Jan 3, 2017 at 8:23 PM, Rick Harding <rick.hard...@canonical.com>
> wrote:
>
> Has juju expose been run on the applications? The charm can declare which
> ports should be opened when exposed, but they aren't actually opened until
> the operator runs the juju expose command on the application.
>
> On Sun, Jan 1, 2017 at 2:58 AM Mac Lin <mkl0...@gmail.com> wrote:
>
>
> Hi,
>
> I'm running CORD master/cord-in-a-box.sh on an x86_64 server. The same
> script works fine on CloudLab.
>
> If I log into each failed service (lxc), I find that the port is not being
> listened on because the service is not up, which in turn is because it was
> never configured properly. The configuration files are mostly in their
> initial state.
>
> How can I get juju to generate the configuration for the services manually?
>
> TASK [juju-finish : Wait for juju services to have open ports] *****************
> Saturday 31 December 2016  12:04:22 +0000 (0:00:04.738)       0:09:13.981 *****
> failed: [10.100.198.201] (item={u'service': u'ceilometer', u'ipv4_last_octet': 20, u'name': u'ceilometer-1', u'forwarded_ports': [{u'int': 8777, u'ext': 8777}], u'aliases': [u'ceilometer']}) => {"elapsed": 1800, "failed": true, "item": {"aliases": ["ceilometer"], "forwarded_ports": [{"ext": 8777, "int": 8777}], "ipv4_last_octet": 20, "name": "ceilometer-1", "service": "ceilometer"}, "msg": "Timeout when waiting for ceilometer-1:8777"}
> failed: [10.100.198.201] (item={u'service': u'glance', u'ipv4_last_octet': 30, u'name': u'glance-1', u'forwarded_ports': [{u'int': 9292, u'ext': 9292}], u'aliases': [u'glance']}) => {"elapsed": 1800, "failed": true, "item": {"aliases": ["glance"], "forwarded_ports": [{"ext": 9292, "int": 9292}], "ipv4_last_octet": 30, "name": "glance-1", "service": "glance"}, "msg": "Timeout when waiting for glance-1:9292"}
> failed: [10.100.198.201] (item={u'service': u'keystone', u'ipv4_last_octet': 40, u'name': u'keystone-1', u'forwarded_ports': [{u'int': 35357, u'ext': 35357}, {u'int': 4990, u'ext': 4990}, {u'int': 5000, u'ext': 5000}], u'aliases': [u'keystone']}) => {"elapsed": 1800, "failed": true, "item": {"aliases": ["keystone"], "forwarded_ports": [{"ext": 35357, "int": 35357}, {"ext": 4990, "int": 4990}, {"ext": 5000, "int": 5000}], "ipv4_last_octet": 40, "name": "keystone-1", "service": "keystone"}, "msg": "Timeout when waiting for keystone-1:35357"}
> ok: [10.100.198.201] => (item={u'service': u'nagios', u'ipv4_last_octet': 60, u'name': u'nagios-1', u'forwarded_ports': [{u'int': 80, u'ext': 3128}], u'aliases': [u'nagios']})
> failed: [10.100.198.201] (item={u'service': u'neutron-api', u'ipv4_last_octet': 70, u'name': u'neutron-api-1', u'forwarded_ports': [{u'int': 9696, u'ext': 9696}], u'aliases': [u'neutron-api']}) => {"elapsed": 1800, "failed": true, "item": {"aliases": ["neutron-api"], "forwarded_ports": [{"ext": 9696, "int": 9696}], "ipv4_last_octet": 70, "name": "neutron-api-1", "service": "neutron-api"}, "msg": "Timeout when waiting for neutron-api-1:9696"}
> failed: [10.100.198.201] (item={u'service': u'nova-cloud-controller', u'ipv4_last_octet': 80, u'name': u'nova-cloud-controller-1', u'forwarded_ports': [{u'int': 8774, u'ext': 8774}], u'aliases': [u'nova-cloud-controller']}) => {"elapsed": 1800, "failed": true, "item": {"aliases": ["nova-cloud-controller"], "forwarded_ports": [{"ext": 8774, "int": 8774}], "ipv4_last_octet": 80, "name": "nova-cloud-controller-1", "service": "nova-cloud-controller"}, "msg": "Timeout when waiting for nova-cloud-controller-1:8774"}
> failed: [10.100.198.201] (item={u'service': u'openstack-dashboard', u'ipv4_last_octet': 90, u'name': u'openstack-dashboard-1', u'forwarded_ports': [{u'int': 80, u'ext': 8080}], u'aliases': [u'openstack-dashboard']}) => {"elapsed": 1800, "failed": true, "item": {"aliases": ["openstack-dashboard"], "forwarded_ports": [{"ext": 8080, "int": 80}], "ipv4_last_octet": 90, "name": "openstack-dashboard-1", "service": "openstack-dashboard"}, "msg": "Timeout when waiting for openstack-dashboard-1:80"}
>         to retry, use: --limit @/cord/build/platform-install/cord-deploy-openstack.retry
>
>
> I tried to reset juju and lxc with the commands below, but then it got
> stuck at "Obtain Juju Facts for creating machines":
> sudo juju destroy-environment manual
> sudo lxc delete $(sudo lxc list | grep ^\| | grep NAME -v | cut -d' ' -f
> 2) --force
>
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>