Re: [openstack-dev] [charms] Propose Felipe Reyes for OpenStack Charmers team

2018-09-12 Thread Liam Young
+1 and thanks for all your contributions Felipe!

On Wed, Sep 12, 2018 at 6:51 AM Chris MacNaughton <
chris.macnaugh...@canonical.com> wrote:

> +1 Felipe has been a solid contributor to the OpenStack Charms for some
> time now.
>
> Chris
>
> On 11-09-18 23:07, Ryan Beisner wrote:
>
> +1  I'm always happy to see Felipe's contributions and fixes come through.
>
> Cheers!
>
> Ryan
>
>
>
>
> On Tue, Sep 11, 2018 at 1:10 PM James Page 
> wrote:
>
>> +1
>>
>> On Wed, 5 Sep 2018 at 15:48 Billy Olsen  wrote:
>>
>>> Hi,
>>>
>>> I'd like to propose Felipe Reyes to join the OpenStack Charmers team as
>>> a core member. Over the past couple of years Felipe has contributed
>>> numerous patches and reviews to the OpenStack charms [0]. His experience
>>> and knowledge of the charms used in OpenStack and the usage of Juju make
>>> him a great candidate.
>>>
>>> [0] -
>>>
>>> https://review.openstack.org/#/q/owner:%22Felipe+Reyes+%253Cfelipe.reyes%2540canonical.com%253E%22
>>>
>>> Thanks,
>>>
>>> Billy Olsen


Re: [openstack-dev] [nova] Guests not getting metadata in a Cellsv2 deploy

2018-08-03 Thread Liam Young
FWIW, this appears to be due to a bug in nova. I've raised
https://bugs.launchpad.net/nova/+bug/1785235 and proposed a fix:
https://review.openstack.org/588520
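
Roughly, the pattern the fix relies on looks like this (a sketch using nova's
InstanceMapping and target_cell helpers, not the actual patch):

    from nova import context as nova_context
    from nova import objects

    def get_instance(ctxt, instance_uuid):
        # Ask the API database which cell the instance lives in...
        mapping = objects.InstanceMapping.get_by_instance_uuid(
            ctxt, instance_uuid)
        # ...then scope the lookup to that cell's database instead of
        # letting it fall through to cell0.
        with nova_context.target_cell(ctxt, mapping.cell_mapping) as cctxt:
            return objects.Instance.get_by_uuid(cctxt, instance_uuid)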

On Thu, Aug 2, 2018 at 5:47 PM Liam Young  wrote:

> Hi,
>
> I have a fresh pike deployment and the guests are not getting metadata. To
> investigate it further it would really help me to understand what the
> metadata flow is supposed to look like.
>
> In my deployment the guest receives a 404 when hitting
> http://169.254.169.254/latest/meta-data. I have added some logging to
> expose the messages passing via amqp and I see the nova-api-metadata
> service making a call to the super-conductor asking for an InstanceMapping.
> The super-conductor sends a reply detailing which cell the instance is in
> and the URLs for both MySQL and RabbitMQ. The nova-api-metadata service then
> sends a second message to the super-conductor, this time asking for
> an Instance object. The super-conductor fails to find the instance and returns
> a failure with an "InstanceNotFound: Instance  could not be found"
> message, and the nova-api-metadata service then sends a 404 to the original
> requester.
>
> I think the super-conductor is looking in the wrong database for the
> instance information. I believe it is looking in cell0 when it should
> actually be connecting to an entirely different instance of mysql which is
> associated with the cell that the instance is in.
>
> Should the super-conductor even be trying to retrieve the instance
> information or should the nova-api-metadata service actually be messaging
> the conductor in the compute cell?
>
> Any pointers gratefully received!
> Thanks
> Liam


[openstack-dev] [nova] Guests not getting metadata in a Cellsv2 deploy

2018-08-02 Thread Liam Young
Hi,

I have a fresh pike deployment and the guests are not getting metadata. To
investigate it further it would really help me to understand what the
metadata flow is supposed to look like.

In my deployment the guest receives a 404 when hitting
http://169.254.169.254/latest/meta-data. I have added some logging to
expose the messages passing via amqp and I see the nova-api-metadata
service making a call to the super-conductor asking for an InstanceMapping.
The super-conductor sends a reply detailing which cell the instance is in
and the URLs for both MySQL and RabbitMQ. The nova-api-metadata service then
sends a second message to the super-conductor, this time asking for
an Instance object. The super-conductor fails to find the instance and returns
a failure with an "InstanceNotFound: Instance  could not be found"
message, and the nova-api-metadata service then sends a 404 to the original
requester.

I think the super-conductor is looking in the wrong database for the
instance information. I believe it is looking in cell0 when it should
actually be connecting to an entirely different instance of mysql which is
associated with the cell that the instance is in.

Should the super-conductor even be trying to retrieve the instance
information or should the nova-api-metadata service actually be messaging
the conductor in the compute cell?

Any pointers gratefully received!
Thanks
Liam


Re: [openstack-dev] [charms]

2018-02-19 Thread Liam Young
On Mon, Feb 19, 2018 at 9:05 AM, Dmitrii Shcherbakov
 wrote:
> Hi Liam,
>
>> I was recently looking at how to support custom configuration that relies
>> on post deployment setup.
>
> I would describe the problem in general as follows:
>
> 1) charms can get context not only from Juju (config options, relation data,
> leader data) and the environment (operating system release, OpenStack release,
> services running, etc.) but also from a stateful data store (e.g. a Keystone
> database);
> 2) it's not easy to track application state from a charm, because
> authentication is needed to fetch persistent state, and notifications from a
> data store cannot be reliably set up because charm code is run periodically
> and is not always present in memory (polling is neither timely nor
> efficient). Another problem is that the software that holds the state needs to
> support data change notifications, which raises version compatibility
> questions.
>
> By using actions we move the responsibility for data retrieval and change
> notifications to an operator but a more generic scenario would be modeling a
> feedback loop from an application to Juju as a modeling system where changes
> can be either automatic or gated by an operator (an orchestrator). Making it
> automatic would mean that a service would get notifications/poll data from a
> state store and would be authorized to use the Juju client to make certain
> changes.

This is an interesting idea, but there is no such mechanism within
Juju that I know of.

>
> Another problem to solve is maintenance of that state: if we start
> maintaining a key-value DB in leader settings we need to think about data
> migration over time and how to access the current state.

Data migration from where to where? We access the current state by retrieving
the data from the leader db, or am I missing something here?

> In other words, in
> CRUD, the "C" part is relatively straightforward, "R" is more complicated
> with large data sets (if I have a lot of leader data, how do I interpret it
> efficiently?),

Perhaps I'm being naive, but I don't see these developing into data sets large
enough to cause performance problems.

> "UD" is less clear - seems like there will have to be 3 or 4
> actions per feature for C, [R], U and D or one action that can multiplex
> commands.

Each time the action is run, the context associated with the action is deleted
and recreated. If an action argument is unset, I guess we could interpret that
as leave-unchanged.
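
A sketch of that interpretation, assuming charmhelpers' action_get and the
leader-db helpers (the namespace key is illustrative):

    import json

    from charmhelpers.core.hookenv import action_get, leader_get, leader_set

    def update_feature(namespace):
        # Merge only the args the operator actually set; an unset arg
        # leaves the stored value alone.
        state = json.loads(leader_get(namespace) or '{}')
        for key, value in (action_get() or {}).items():
            if value is not None:
                state[key] = value
        leader_set({namespace: json.dumps(state)})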

>
> This brings me to the question of how this is different from state-specific
> config values with a complex structure.

To my mind the difference is complexity for the end user. An action has clearly
defined arguments and the charm action code looks after forming this into
the correct context.

> Instead of leader data, a per-charm
> config option could hold state data in some format namespaced by a feature
> name or config file name to render. A data model would be needed to make
> sure we can create versioned application-specific state buckets (e.g. for
> upgrades, hold both states, then remove the old one).
>
> Application version-specific config values are something not modeled in Juju
> although custom application versions are present
> (https://jujucharms.com/docs/2.3/reference-hook-tools#application-version-set).
> Version information has to be set via a hook tool which means that it has to
> come from a custom config option anyway. Each charm has its own method to
> specify an application version and config dependencies are not modeled
> explicitly - one has to implement that logic in a charm without any Juju API
> for charms present the way I see it.
>
> config('key', 'app-version') - would be something to aim for.
>
> Do you have any thoughts about leader data vs a special complex config
> option per charm and versioning?
>
> Thanks!

Thanks for the feedback Dmitrii



[openstack-dev] [charms]

2018-02-16 Thread Liam Young
Hi,

I was recently looking at how to support custom configuration that relies on
post-deployment setup; specifically, how to support optional configuration for
designate's sink service. The configuration lives on the application units but
needs the domain id of the designate domain that the records should be created
in. This domain is created post-deployment and, obviously, the uuid of the
domain will change on each deployment.

I would like to propose doing this through post-deployment actions, and I think
the general approach will be useful across multiple charms. The charm can have
pre-defined custom config which can be enabled through an action. The action
parameters also provide an additional context for rendering the template which
includes data from the post-deployment setup. This approach does not allow
arbitrary config to be injected; instead it allows predefined config to be
activated via actions.

To illustrate the approach I'll stick with the designate example:

1) Cloud deployed and administrator sets up new domain
2) Administrator runs a new add-sink-config action and passes the domain-id
   and sink config file name.
3) The lead unit updates a map in the leader db which lists additional config
   files and the corresponding context derived from the action options. Each
   set of config stores its options in its own namespace (sketched below).
4) The lead unit then triggers the config to be re-rendered locally.
5) Non-lead units are triggered by the leader-settings-changed hook and also
   re-render their configs.
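
A minimal sketch of steps 2-4 on the lead unit, assuming charmhelpers' hook
tools (the action name, leader-db key, and renderer are illustrative, not the
final charm API):

    import json

    from charmhelpers.core.hookenv import action_get, leader_get, leader_set

    def add_sink_config():
        params = action_get()  # e.g. {'domain-id': ..., 'config-file': ...}
        # Step 3: namespace each config file's context in the leader-db map.
        configs = json.loads(leader_get('sink-configs') or '{}')
        configs[params['config-file']] = {'domain_id': params['domain-id']}
        leader_set({'sink-configs': json.dumps(configs)})
        # Step 4: re-render locally; leader_set fires leader-settings-changed
        # on the non-lead units (step 5), which re-render in turn.
        render_sink_files(configs)  # hypothetical renderer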

Here is a prototype for the designate charm: https://goo.gl/CJj2Rh

Any thoughts, objections, or you-haven't-thought-of-this's gratefully received.
Liam



[openstack-dev] [charms]

2018-02-14 Thread Liam Young
Hi,

I would like to propose that we do not support the notifications
method for automatically creating DNS records in Queens+. This method
for achieving Neutron integration has been superseded both upstream
and in the charms. By removing support for it in Queens we prevent the
charm from attempting to make designate v1 API calls on Queens+, which
is a positive thing given that the v1 API has been removed (
https://docs.openstack.org/releasenotes/designate/queens.html#critical-issues
).

Thanks
Liam



Re: [openstack-dev] [charms] evolving the ha interface type

2018-01-04 Thread Liam Young
Hi James,
I like option 2 but I think there is a problem with it. I don't
think the hacluster charm sets any data down the relation with the
principal until it has first received data from the principal. As I
understand it, option 2 would change this behaviour so that hacluster
immediately sets an api-version option for the principal to consume.
The only problem is that the principal does not know whether to wait for
this api-version information or not, e.g. when the principal is deciding
whether to json-encode its data it cannot differentiate between:

a) An old version of the hacluster charm which does not support
api-version or json.
b) A new version of the hacluster charm which has not set the api-version yet.
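
To make the ambiguity concrete, here is a minimal sketch from the principal's
side, assuming charmhelpers' relation_get (the api-version key is the one
proposed in this thread):

    from charmhelpers.core.hookenv import relation_get

    def hacluster_api_version():
        # What the principal sees on the ha relation.
        api_version = relation_get('api-version')
        if api_version is None:
            # Ambiguous: (a) an old hacluster that will never set
            # api-version, or (b) a new hacluster that just hasn't set it
            # yet. The principal can't tell them apart, so it can't safely
            # decide whether to json-encode its data.
            return None
        return api_version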

Thanks
Liam



[openstack-dev] [charms] [designate] Domain and server creation during deployment

2017-11-22 Thread Liam Young
The Designate sink service relies on sink file(s) that contain the domain
id(s) of the domains that automatically generated records should be added to.
At the moment the designate charm creates a server and domains during charm
installation if the neutron-domain and/or nova-domain config options have been
set. It then renders the sink files accordingly. This is done via the
designate CLI and obviously relies on the keystone API and designate API
services being up and available. This is unsurprisingly proving to be
unreliable and racy, particularly during HA deployments.

The heat charm has a similar issue in that it relies on a domain having been
created before heat can be used. But rather than trying to create the domain
during charm installation, the heat charm exposes an action which should be
run post-installation to create the domain.

I think that the designate charm should be updated to work in a similar way to
the heat charm, and that the server and domain creation and sink file
rendering should be done via a post-deployment action rather than at
deployment time. There is a complication to this approach: all the designate
API units will need to render sink configurations containing the domain id(s)
once the creation action has run. I can think of two similar ways to achieve
this:

1) Expose a server and domain creation action that must be run on the leader.
   During the action the leader then sets the domain ids via the leader db.
   The non-leaders can then respond to leader-settings-changed and render
   their sink file(s). Storing the sink config in the leader db also has the
   advantage that if the designate service is scaled out at a later date then
   the new unit will still have access to the sink configuration and can
   render the sink files.

2) A very similar approach would be to push the creation of servers and
   domains back to the administrator to perform, and expose a generic action
   for creating sink files which accepts the domain id as one of its
   arguments. Again this would need to be run on the leader and propagated
   via leader-settings (see the sketch below).

I'm inclined to go with option 2. Does anyone have any objections or
suggestions for an alternative approach?
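
A minimal sketch of the propagation half of option 2, assuming charmhelpers'
leader_get and templating.render (the hook wiring, leader-db key and target
path are illustrative):

    from charmhelpers.core.hookenv import leader_get
    from charmhelpers.core.templating import render

    def render_sink_files():
        # Runs from leader-settings-changed on every unit (and on the
        # leader right after the action stores the domain id).
        domain_id = leader_get('neutron-domain-id')
        if not domain_id:
            return  # the creation action hasn't been run yet
        render('neutron_sink.cfg',
               '/etc/designate/conf.d/neutron_sink.cfg',
               {'domain_id': domain_id})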

Thanks
Liam


[openstack-dev] [Keystone][Tempest][QA] Tempest full fails with policy.v3cloudsample.json and gate is using old policy.json

2017-01-24 Thread Liam Young
Hi,

Firstly, apologies for the cross-post from openstack@l.o.o, but I think this
is a more appropriate mailing list and I'd like to add some more
information.

I have been running tempest full against a Keystone v3-enabled cloud using
the stable newton policy.v3cloudsample.json *1 and it is failing for me. I
then checked what was happening at the Keystone gate *2 and saw that the v3
gate jobs appear to be using the old policy.json *3, which I assume is
deprecated for v3, as granting the admin role on anything in effect gives a
user cloud-admin.
My questions are:
1) Should gate be using policy.v3cloudsample.json to run v3 tests?
2) Should I expect a tempest full run to pass against a Newton deployment
using policy.v3cloudsample.json ?

What I'm seeing is that some tests (like
tempest.api.compute.admin.test_quotas) fail when they try to list_domains.
This seems to be because the test creates:

1) A new project in the admin domain
2) A new user in the admin domain
3) Grants the admin role on the new project to the new user.

The test then authenticates with the new user's credentials and attempts to
list_domains. The policy.json, however, has:

    "cloud_admin": "role:admin and (token.is_admin_project:True or domain_id:363ab68785c24c81a784edca1bceb935)",
    ...
    "identity:list_domains": "rule:cloud_admin",

From tempest I see:

======================================================================
FAIL: tempest.api.compute.admin.test_quotas.QuotasAdminTestJSON.test_delete_quota[id-389d04f0-3a41-405f-9317-e5f86e3c44f0]
tags: worker-0
----------------------------------------------------------------------
Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{2017-01-23 15:57:09,806 2014 INFO
[tempest.lib.common.rest_client] Request
(QuotasAdminTestJSON:test_delete_quota): 403 GET
http://10.5.36.109:35357/v3/domains?name=admin_domain 0.066s}}}

Traceback (most recent call last):
  File "tempest/api/compute/admin/test_quotas.py", line 128, in test_delete_quota
    project = self.identity_utils.create_project(name=project_name,
  File "tempest/test.py", line 470, in identity_utils
    project_domain_name=domain)
  File "tempest/lib/common/cred_client.py", line 210, in get_creds_client
    roles_client, domains_client, project_domain_name)
  File "tempest/lib/common/cred_client.py", line 142, in __init__
    name=domain_name)['domains'][0]
  File "tempest/lib/services/identity/v3/domains_client.py", line 57, in list_domains
    resp, body = self.get(url)
  File "tempest/lib/common/rest_client.py", line 290, in get
    return self.request('GET', url, extra_headers, headers)
  File "tempest/lib/common/rest_client.py", line 663, in request
    self._error_checker(resp, resp_body)
  File "tempest/lib/common/rest_client.py", line 755, in _error_checker
    raise exceptions.Forbidden(resp_body, resp=resp)
tempest.lib.exceptions.Forbidden: Forbidden
Details: {u'message': u'You are not authorized to perform the requested action: identity:list_domains', u'code': 403, u'title': u'Forbidden'}

In the keystone log I see:

(keystone.policy.backends.rules): 2017-01-23 15:35:57,198 DEBUG enforce identity:list_domains:
    {'is_delegated_auth': False,
     'access_token_id': None,
     'user_id': u'3fd9e70825d648d996080d855cf9c181',
     'roles': [u'Admin'],
     'user_domain_id': u'363ab68785c24c81a784edca1bceb935',
     'consumer_id': None,
     'trustee_id': None,
     'is_domain': False,
     'trustor_id': None,
     'token': ,
     'project_id': u'b48ba24e96d84de4a48077b9310faac7',
     'trust_id': None,
     'project_domain_id': u'363ab68785c24c81a784edca1bceb935'}
(keystone.common.wsgi): 2017-01-23 15:35:57,199 WARNING You are not
authorized to perform the requested action: identity:list_domains

This appears to be because the token is project-scoped. If I update the
policy.json to also grant cloud_admin when the project's domain is the admin
domain, that seems to fix things. The change I'm trying is:

3c3,4
< "cloud_admin": "role:admin and (token.is_admin_project:True or domain_id:admin_domain_id)",
---
> "bob": "project_domain_id:363ab68785c24c81a784edca1bceb935 or domain_id:363ab68785c24c81a784edca1bceb935",
> "cloud_admin": "role:admin and (token.is_admin_project:True or rule:bob)",

I did notice this comment on Bug #1451987 *4:

If you see following errors for all identity api v3 tests, then please be
aware that it's not a bug in tempest; rather, you need to change keystone
v3 policy.json and make it more relaxed so tempest can authorize with users
created for each test with separate projects (tenants), because we set
tenant_isolation to True in tempest.conf ...

This suggests to me that it is expected for policy.json to need tweaking.

Regards
Liam

*1
https://github.com/openstack/keystone/blob/stable/newton/etc/policy.v3cloudsample.json
*2
http://logs.openstack.org/66/418166/10/check/gate-keystone-dsvm-functional-v3-only-ubuntu-xenial-nv/fc0af39/logs/etc/keystone/policy.json.txt.gz
*3 https://github.com/openstack/keystone/blob/master/etc/policy.json
*4 

Re: [openstack-dev] [charms] monitoring interface

2017-01-19 Thread Liam Young
Hi Brad,

Thanks for looking into it. I think things should actually work out of the
box as they are now. So,

juju deploy nrpe nrpe-glance
juju deploy nrpe nrpe-cinder
juju deploy nagios
juju deploy glance
juju deploy cinder
juju add-relation nrpe-glance glance
juju add-relation nrpe-glance nagios
juju add-relation nrpe-cinder cinder
juju add-relation nrpe-cinder nagios

Should add Nagios checks for glance and cinder to the Juju-deployed nagios.
(Taken from
https://wiki.ubuntu.com/OpenStack/OpenStackCharms/ReleaseNotes1504#Monitoring
).

Ideally we would rename the nrpe-external-master interface to local-monitor
(or add it as an additional interface) but that is not needed to get it up
and running.

Thanks
Liam

On 18 January 2017 at 16:07, Brad Marshall 
wrote:

> Hi all,
>
> We're looking at adding the monitor interface to the openstack charms to
> enable us to use the nagios charm, rather than via an external nagios
> using nrpe-external-master.
>
> I believe this will just be a matter of adding in the interface, adding
> an appropriate monitor.yaml that defines the checks, and updating
> charmhelpers.contrib.charmsupport.nrpe so that when it adds checks, it
> passes the appropriate information onto the relationship.
>
> Are there any concerns with this approach? Any suggestions on things to
> watch out for?  It does mean touching every charm, but I can't see any
> other way around it.
>
> Thanks,
> Brad
> --
> Brad Marshall
> Cloud Reliability Engineer
> Bootstack Squad, Canonical


Re: [openstack-dev] [charms] Default KeystoneV3 Creds

2016-11-11 Thread Liam Young
On Fri, Nov 11, 2016 at 12:41 AM, James Beedy  wrote:

> Concerning the Juju Keystone charm, and api v3, can someone shed some
> light on how to find the default admin creds needed to access the keystone
> api, or what the novarc file might look like? I'm setting the
> 'admin-password' config of the keystone charm to "openstack", and am using
> this -> http://paste.ubuntu.com/23458768/ for my .novarc, but am getting
> a 401 -> http://paste.ubuntu.com/23458782/
>

https://wiki.ubuntu.com/OpenStack/OpenStackCharms/ReleaseNotes1604#Keystone_v3_API_support
should point you in the right direction.


>
> Is there something I'm missing here?
>
> thx


[openstack-dev] [charms] project mascot - cobra

2016-07-26 Thread Liam Young
Hi Heidi,

We'd like to go for a cobra as our first choice ('[snake] charming
OpenStack') and, as a backup, a giant squid ('many-armed animal managing
OpenStack'). It was discussed on the mailing list and voted for in
yesterday's OpenStack Charms IRC meeting.

Thanks
Liam


[openstack-dev] [designate][charms] Synchronising domains with new bind9 server

2016-07-01 Thread Liam Young
Hi,

I'm trying to add a new bind9 pool_target to an existing pool. The problem
is that the new bind server has no knowledge of the existing zones, as it
missed the addzone commands when the domains were created. It seems to me
I have 3 options:

1) To sync the zone + nzf files from an existing bind9 pool_target
2) Write a script to extract a list of domains for all tenants from
designate and convert those into "rndc addzone" commands targeted at the
new unit (sketched below)
3) Some builtin designate method I've yet to discover?
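
For option 2, a rough sketch of the replay step (the masters address, zone
options and file layout are assumptions about what designate's bind9 backend
issues, and the zone list would come from querying designate for all tenants'
domains; this is not a tested tool):

    import subprocess

    def replay_addzone(zone_names, masters='10.0.0.10',
                       zone_dir='/var/cache/bind'):
        """Replay the addzone calls the new bind9 unit missed."""
        for zone in zone_names:
            # Mirrors (roughly) what designate's bind9 backend issues
            # when a domain is created.
            subprocess.check_call([
                'rndc', 'addzone', zone,
                '{ type slave; masters { %s; }; file "%s/%s"; };'
                % (masters, zone_dir, zone),
            ])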

What would you recommend?

I'm writing a designate charm and a designate-bind charm and testing scale-out,
which is what caused me to trip over this issue. So for option 1 I'll need to
synchronise a directory between Juju units of an application. Does anyone have
a neat way of doing this?
Thanks
Liam