Re: usr and pwd of openstack ------> Reply: juju deploy failure on nova

2015-11-24 Thread Billy Olsen
Hi Cathy,

The easiest way is simply to set the admin-password option on the keystone
service: juju set keystone admin-password=mypassword.
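
If you leave it unset, the charm generates a random password (see the
admin-password description quoted below). The fallback logic is roughly of
this shape (a minimal sketch using charmhelpers' hookenv.config(); the
storage path and helper name here are hypothetical, not the keystone
charm's actual code):

import os
from base64 import b64encode

from charmhelpers.core import hookenv

# Hypothetical location for the generated password; illustrative only.
STORED_PASSWD = '/var/lib/keystone/admin.passwd'

def get_admin_password():
    """Prefer the operator-set config value; otherwise fall back to a
    random password persisted across hook invocations."""
    password = hookenv.config('admin-password')
    if password and str(password).lower() != 'none':
        return password
    if os.path.exists(STORED_PASSWD):
        with open(STORED_PASSWD) as f:
            return f.read().strip()
    password = b64encode(os.urandom(18)).decode()
    with open(STORED_PASSWD, 'w') as f:
        f.write(password)
    return password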

For mysql, are you using mysql or percona?

- Billy

On Mon, Nov 23, 2015 at 5:51 AM, wuwenbin  wrote:

> Hi Billy:
>
>    You're right about the relationship. Now another problem has occurred:
> the username and password to log in to openstack-dashboard can't be found.
> I checked the keystone config.yaml and found the related config below, but
> that doesn't work. I also tried to use mysql, but that password isn't
> known either. If you know what they are or how to find them, please help
> me.
>
>Thanks.
>
> Best regards
>
> Cathy
>
>
>
> admin-user:
>   default: admin
>   type: string
>   description: Default admin user to create and manage.
> admin-password:
>   default: None
>   type: string
>   description: |
>     Admin password. To be used *for testing only*. Randomly generated by
>     default.
>
>
>
> *From:* Billy Olsen [mailto:billy.ol...@canonical.com]
> *Sent:* 23 November 2015 1:57
> *To:* wuwenbin
> *Cc:* ad...@canonical.com; jiangrui (D); juju@lists.ubuntu.com;
> Weidong.Shao; Qinchuan; Ashlee Young; Zhaokexue
> *Subject:* Re: juju deploy failure on nova
>
>
>
> Hi Cathy,
>
>
>
> These messages indicate why the service is not yet fully functional. As
> you point out, the software packages for the base charm service have been
> installed at this point in time, but additional relations are needed to
> make the service fully functioning. These messages are normally expected
> when the service has just been deployed.
>
>
>
> If you now add the missing relations, you'll see this message disappear
> from the status output.
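>
> As an illustration, a blocked status like the one in your output is
> derived from a check along these lines (a simplified sketch using
> charmhelpers' hookenv; the REQUIRED_INTERFACES mapping and function
> names are illustrative, not the charm's actual code):
>
> from charmhelpers.core import hookenv
>
> # Illustrative mapping of logical relation names to interface types.
> REQUIRED_INTERFACES = {
>     'messaging': ['amqp'],
>     'image': ['image-service'],
>     'identity': ['identity-service'],
>     'database': ['shared-db'],
> }
>
> def missing_relations():
>     """Return the logical relations with no established relation."""
>     return [name for name, interfaces in REQUIRED_INTERFACES.items()
>             if not any(hookenv.relation_ids(i) for i in interfaces)]
>
> def assess_status():
>     missing = missing_relations()
>     if missing:
>         hookenv.status_set('blocked', 'Missing relations: {}'.format(
>             ', '.join(sorted(missing))))
>     else:
>         hookenv.status_set('active', 'Unit is ready')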
>
>
>
> Thanks,
>
>
>
> Billy
>
>
>
>
>
> On Sat, Nov 21, 2015 at 1:59 AM, wuwenbin  wrote:
>
> Hi Adam:
>
>  I downloaded the trusty code and used juju to deploy openstack, but
> there are problems with nova-cloud-controller and nova-compute.
>
>  The error info is as follows. I think installing a charm should be an
> independent operation, because we add the relationships between those
> charms later. I have no idea what's going on.
>
>  Looking forward to your reply.
>
>  Thanks.
>
> Best regards
>
> Cathy
>
>
>
> Error info:
>
> nova-cloud-controller:
>   charm: local:trusty/nova-cloud-controller-501
>   exposed: false
>   service-status:
>     current: blocked
>     message: 'Missing relations: messaging, image, compute, identity, database'
>     since: 21 Nov 2015 12:44:35+08:00
>   relations:
>     cluster:
>     - nova-cloud-controller
>   units:
>     nova-cloud-controller/0:
>       workload-status:
>         current: blocked
>         message: 'Missing relations: messaging, image, compute, identity, database'
>         since: 21 Nov 2015 12:44:35+08:00
>       agent-status:
>         current: idle
>         since: 21 Nov 2015 16:39:38+08:00
>         version: 1.25.0.1
>       agent-state: started
>       agent-version: 1.25.0.1
>       machine: "5"
>       open-ports:
>       - /tcp
>       - 8773/tcp
>       - 8774/tcp
>       - 9696/tcp
>       public-address: 192.168.122.242
> nova-compute:
>   charm: local:trusty/nova-compute-133
>   exposed: false
>   service-status:
>     current: blocked
>     message: 'Missing relations: messaging, image'
>     since: 21 Nov 2015 16:40:46+08:00
>   relations:
>     compute-peer:
>     - nova-compute
>   units:
>     nova-compute/0:
>       workload-status:
>         current: blocked
>         message: 'Missing relations: messaging, image'
>         since: 21 Nov 2015 16:40:46+08:00
>       agent-status:
>         current: idle
>         since: 21 Nov 2015 16:40:48+08:00
>         version: 1.25.0.1
>       agent-state: started
>       agent-version: 1.25.0.1
>       machine: "1"
>       public-address: 192.168.122.56
>
>
>
> log info:
>
> unit-nova-compute-0[3088]: 2015-11-21 08:50:56 WARNING unit.nova-compute/0.juju-log server.go:268 messaging relation is missing and must be related for functionality.
>
> unit-nova-compute-0[3088]: 2015-11-21 08:50:56 WARNING unit.nova-compute/0.juju-log server.go:268 image relation is missing and must be related for functionality.
>



-- 
Billy Olsen

billy.ol...@canonical.com
Software Engineer
Canonical USA
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Tuning ceph

2015-11-24 Thread Pshem Kowalczyk
Hi,

I'm relatively new to the juju ecosystem. I've built a test/POC openstack
setup using juju charms, with ceph as the backend storage system for the
deployment. Since the production deployment of this system has to meet
some external requirements (particular CRUSH settings, recovery times,
etc.), I'll have to tune the ceph settings a bit.

The charm itself doesn't seem to have a config section for adding that
information (some other charms do). What's the best way of doing it?

More generally, I've realised that it would sometimes be useful to have
the ability to run some actions after juju has finished its configuration,
to fine-tune it to particular requirements (without losing the advantages
of using juju for all the dependencies). Is it possible to do something
like that without building my own charms?

kind regards
Pshem
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: The future of Charm Helpers

2015-11-24 Thread Stuart Bishop
On 24 November 2015 at 01:53, Cory Johns  wrote:

> For example, while I haven't worked with leadership so can't really speak to
> that, I haven't felt the need to translate workload statuses into reactive
> states and vice versa, and have had no issues with calling
> hookenv.status_set() directly.  In particular,

I think the only bits of hookenv.py that make sense to be 'reactive
aware' are the leadership settings and, to a lesser extent, workload
status. Here we have hook environment state that can be mutated by the
charm, and which other parts of the charm or other layers may need to
react to.

However, now that I've thought about this more, having my layer react to
side effects your layer made to the hook environment is rather dubious. I
really should be reacting to state your layer publishes, even if that
requires fixing or forking your layer.

So yes, I'm fine with just a low-level hookenv.py replacement that
doesn't set magic states. Please consider the examples below off topic
:)

> https://github.com/johnsca/layer-apache-spark/blob/master/reactive/spark.py
> is roughly how I think about managing workload status in a reactive charm.
> (Of course, if that approach is missing some significant benefit to another
> approach, please let me know!)

@when_not('leadership.set.password')
@when_not('workloadstatus.blocked')
def wait_for_leadership():
    charms.hookenv.status_set('waiting', 'Waiting for leader to lead')

This is how I'm toying with handling waiting and blocked workload
states. If state is such that work cannot be completed until a future
hook, set the waiting status. But only if the status is not already
set to blocked, because overriding a blocked status and message would
hide a problem from the operator. It seems very similar to what you
are doing with spark.py, except that with your code your waiting
states would override and hide a blocked state set by a different
layer. As an example, the apt layer I've got will set the workload
state to 'blocked' if requested packages cannot be installed.
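
For concreteness, the apt layer behaviour I mentioned looks roughly like
this (a sketch rather than the layer's actual code; the package name and
the direct apt-get call are placeholders for the layer's real install
helper):

import subprocess

from charmhelpers.core import hookenv
from charms.reactive import when_not, set_state

@when_not('apt.installed')
def install_queued_packages():
    try:
        # Placeholder for the layer's real install machinery.
        subprocess.check_call(['apt-get', 'install', '-y', 'cassandra'])
        set_state('apt.installed')
    except subprocess.CalledProcessError:
        # Blocked must win over waiting, so also publish a state that
        # other layers' waiting handlers can guard against.
        hookenv.status_set('blocked', 'Unable to install packages')
        set_state('workloadstatus.blocked')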

But leadership states are certainly more interesting to me than workload status:

@when('leadership.is_leader')
@when_not('leadership.set.password')
def set_cluster_password():
    charms.hookenv.leader_set(password=mkpass())

@when('leadership.changed.password')
def store_creds():
    rewrite(os.path.expanduser('~root/.cqlshrc'), '''
        user=foo
        password={}
        '''.format(charms.hookenv.leader_get('password')))

Here the store_creds method gets invoked on all units. The reactive
state can now reach beyond a single unit, with the leader triggering
handlers on its peers. On the leader, the store_creds handler will be
invoked in the same hook in which set_cluster_password was invoked (the
first one in this simple example, so storage-* or install). On the other
units, it will be invoked in a hook run soon after the leader set the
setting: perhaps the install hook if the leader ran first, perhaps some
later hook, most likely the leader-settings-changed hook, or perhaps the
leader-elected hook if a leadership failover occurred at the right
moment.
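
The plumbing needed for the leadership states is small; a base layer
could map the leadership hook environment onto states along these lines
(a rough sketch using charmhelpers' hookenv and unitdata, not an actual
implementation, and clearing the changed.* states at the end of the hook
is omitted for brevity):

from charmhelpers.core import hookenv, unitdata
from charms.reactive import set_state, remove_state

def sync_leadership_states():
    """Run at the start of every hook to publish leadership as states."""
    if hookenv.is_leader():
        set_state('leadership.is_leader')
    else:
        remove_state('leadership.is_leader')
    kv = unitdata.kv()
    for key, value in (hookenv.leader_get() or {}).items():
        set_state('leadership.set.{}'.format(key))
        if kv.get('leadership.' + key) != value:
            set_state('leadership.changed.{}'.format(key))
            kv.set('leadership.' + key, value)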

-- 
Stuart Bishop 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju