Re: [openstack-dev] [kolla-ansible] how do I unify log data format

2018-07-14 Thread Rich Megginson

On 07/14/2018 07:29 AM, Sergey Glazyrin wrote:

Hello guys!
We are migrating our product to kolla-ansible and, as you probably 
know, it uses fluentd to collect and route logs. In non-containerized 
OpenStack we use rsyslog to send data to Logstash.


Why not use rsyslog in containerized OpenStack too?
Why not use rsyslog to mutate/unify the records?  Why use Logstash? Note 
that rsyslog can send records directly to Elasticsearch, and the latest 
rsyslog (8.36) has enhanced the elasticsearch plugin to do client-cert 
auth as well as handle bulk index retries more efficiently.
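As a rough sketch of that suggestion: rsyslog's omelasticsearch output module can ship records straight to Elasticsearch. The server name, index, and template below are placeholder examples, not values from this thread:

```
# Hypothetical rsyslog (8.36+) sketch - ship syslog records straight
# to Elasticsearch, bypassing Logstash. Names are examples only.
module(load="omelasticsearch")

# Render each record as a small JSON document.
template(name="es-json" type="list") {
  constant(value="{\"timestamp\":\"")
  property(name="timereported" dateFormat="rfc3339")
  constant(value="\",\"host\":\"")
  property(name="hostname")
  constant(value="\",\"message\":\"")
  property(name="msg" format="json")
  constant(value="\"}")
}

action(type="omelasticsearch"
       server="elasticsearch.example.com"
       serverport="9200"
       searchIndex="logstash-index"
       template="es-json"
       bulkmode="on")
```

With bulkmode enabled, records are batched into Elasticsearch's bulk API rather than indexed one at a time.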


We get data from syslog events, but it looks like it's impossible to 
use syslog in kolla-ansible: unfortunately the external_syslog_server 
option doesn't work. Has anyone been able to use it? Never mind, we 
can use fluentd, but we have one problem - a different data format 
for each service/container.


So, probably the best option is to use the default logging design in 
kolla-ansible (to be honest, I am not sure, but I've not found a 
better one). But even with the default logging design in kolla-ansible 
we have one serious problem: fluentd produces a different data format 
for each service. For instance, this commit explains how it's designed 
in kolla-ansible:

https://github.com/openstack/kolla-ansible/commit/3026cef7cfd1828a27e565d4211692f0ab0ce22e

There are grok patterns which parse the log messages, etc.

So, we managed to get the data into Elasticsearch, but we need to solve 
two problems:
1. Unify the data format of log events. We could solve this by using 
Logstash to normalize records before putting them into Elasticsearch 
(or should we change the fluentd configs in our own fork of the 
kolla-ansible repository?).

For instance, we could do it using this Logstash plugin:
https://www.elastic.co/guide/en/logstash/2.4/plugins-filters-mutate.html#plugins-filters-mutate-rename
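For illustration, the mutate filter's rename option from that link could normalize field names like this. The field names here are placeholders, not the actual kolla-ansible field names:

```
filter {
  mutate {
    # Hypothetical mapping: fold each service's per-container field
    # names into one shared schema before indexing.
    rename => {
      "log_level" => "level"
      "Payload"   => "message"
    }
  }
}
```

One such filter per divergent service would let a single index template apply to all of them.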

What's your suggestion?


--
Best, Sergey


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [puppet] Proposing David Moreau Simard part of Puppet OpenStack CI core team

2016-09-28 Thread Rich Megginson

On 09/28/2016 10:06 AM, Emilien Macchi wrote:

Until now, we had no specific team for dealing with Puppet OpenStack
CI (aka openstack/puppet-openstack-integration project).
But we have noticed that David has been doing consistent work on
Puppet OpenStack CI, adding more coverage and also helping when
things are broken.
David is always here to help us make testing better.

David works on RDO Infra and re-uses the Puppet OpenStack CI tooling
to test OpenStack, so he has perfect knowledge of how Puppet
OpenStack CI works.

I would like to request feedback from our community about creating
this new Gerrit group (where we would include existing Puppet
OpenStack core groups into it), and also include David into it.


+1 for David, whatever he's working on



Thanks,






Re: [openstack-dev] [puppet] Propose Sofer Athlan-Guyot (chem) part of Puppet OpenStack core

2016-07-28 Thread Rich Megginson

+1 - good guy

On 07/28/2016 09:58 AM, Ivan Berezovskiy wrote:

+1, good job!

2016-07-28 18:50 GMT+03:00 Matt Fischer:


+1 from me!


On Jul 28, 2016 9:20 AM, "Emilien Macchi" wrote:

You might not know who Sofer is, but he's actually "chem" on IRC.
He's the guy who will find the root cause of insane bugs, in OpenStack
in general but also in the Puppet OpenStack modules.
Sofer has been working on the Puppet OpenStack modules for a while now,
and is already core in puppet-keystone. Many times he has brought his
expertise to make our modules better.
He's always here on IRC to help folks and has an excellent
understanding of how our project works.

If you want stats:
http://stackalytics.com/?user_id=sofer-athlan-guyot&metric=commits
I'm quite sure Sofer will make more reviews over time, but I have
no doubt he fully deserves to be part of the core reviewers now,
given his technical experience and involvement.

As usual, it's an open decision, please vote +1/-1 on this
proposal.

Thanks,
--
Emilien Macchi






--
Thanks, Ivan Berezovskiy
MOS Puppet Team Lead
at Mirantis 

slack: iberezovskiy
skype: bouhforever
phone: + 7-960-343-42-46





Re: [openstack-dev] [puppet] Discussion of PuppetOpenstack Project abbreviation

2016-06-03 Thread Rich Megginson

On 06/03/2016 01:34 AM, Sergii Golovatiuk wrote:

I would vote for POSM - "Puppet OpenStack Modules"



+1 - "possum", American slang for the animal "opossum"


--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Jun 1, 2016 at 7:25 PM, Cody Herriges wrote:



> On Jun 1, 2016, at 5:56 AM, Dmitry Tantsur wrote:
>
> On 06/01/2016 02:20 PM, Jason Guiditta wrote:
>> On 01/06/16 18:49 +0800, Xingchao Yu wrote:
>>>  Hi, everyone:
>>>
>>>  Do we need to give an abbreviation to the PuppetOpenstack project?
>>>  B/C it's really a long name when I introduce this project to
>>>  people or write articles about it.
>>>
>>>  How about POM (PuppetOpenstack Modules) or POP (PuppetOpenstack
>>>  Project)?
>>>
>>>  I would vote +1 for POM.
>>>  Just an idea, please feel free to give your comments :D
>>>  Xingchao Yu
>>
>> For rdo and osp, we package it as 'openstack-puppet-modules',
or OPM
>> for short.
>
> I definitely love POM as it reminds me of pomeranians :) but I
agree that OPM will probably be more easily recognizable.

The project's official name is in fact "Puppet OpenStack", so OPM
would be kinda confusing.  I'd put my vote on POP because it is
closer to the actual definition of an acronym [1], which I
generally find easier to remember overall when it comes to
shortening long phrases.

[1] http://www.merriam-webster.com/dictionary/acronym

--
Cody




Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-06 Thread Rich Megginson

On 04/06/2016 10:38 AM, Hayes, Graham wrote:

On 06/04/2016 17:17, Rich Megginson wrote:

On 04/06/2016 02:55 AM, Hayes, Graham wrote:

On 06/04/16 03:09, Adam Young wrote:

On 04/05/2016 08:02 AM, Hayes, Graham wrote:

On 02/04/2016 22:33, Adam Young wrote:

I finally have enough understanding of what is going on with Tripleo to
reasonably discuss how to implement solutions for some of the main
security needs of a deployment.


FreeIPA is an identity management solution that can provide support for:

1. TLS on all network communications:
   A. HTTPS for web services
   B. TLS for the message bus
   C. TLS for communication with the database
2. Identity for all actors in the system:
   A. API services
   B. Message producers and consumers
   C. Database consumers
   D. Keystone service users
3. Secure DNS (DNSSEC)
4. Federation support
5. SSH access control to hosts for both undercloud and overcloud
6. SUDO management
7. Single sign-on for applications running in the overcloud


The main pieces of FreeIPA are
1. LDAP (the 389 Directory Server)
2. Kerberos
3. DNS (BIND)
4. Certificate Authority (CA) server (Dogtag)
5. WebUI/Web Service Management Interface (HTTPD)





There are a couple ongoing efforts that will tie in with this:

1. Designate should be able to use the DNS from FreeIPA.  That was the
original implementation.

Designate cannot use FreeIPA - we haven't had a driver for it since
Kilo.

There have been various efforts since then to support FreeIPA, but
FreeIPA requires that it be the sole source of truth for DNS
information, as does Designate.

If FreeIPA supported the traditional Notify and Zone Transfer mechanisms
then we would be fine, but unfortunately it does not.

[1] Actually points out that the goal of FreeIPA's DNS integration
"... is NOT to provide general-purpose DNS server. Features beyond
easing FreeIPA deployment and maintenance are explicitly out of scope."

1 - http://www.freeipa.org/page/DNS#Goals

Let's table that for now. No reason they should not be able to
interoperate somehow.

Without work being done by FreeIPA (to enable the XFR interface on the
bind server) or us (Designate) re-designing our DNS Driver interface
they will not be able to inter-operate.

It's going to be very difficult for FreeIPA to support XFR for the
"main" zone (i.e. the zone in which records are actively
updated/maintained in LDAP and kept in sync globally).  It might be
possible to make it work for a child/sub zone that LDAP doesn't have to
pay much attention to, and let that zone be updated by Designate via
XFR.  I suppose Designate has the same problem with AD DNS integration?

Yes, we do. The MS DNS server has support for other secondary zones
that we could use - that is what we did in the pre Kilo driver.

(As a disclaimer, the msdns driver is known-broken, and unless there
is some resurgence of interest in it, it will be deleted soon.)


If you want to discuss this more, we can take the discussion to
freeipa-us...@redhat.com

Will spin up a thread there - thanks.


The ipa/nova join functionality allows new VM hosts to be automatically
registered with IPA, including the DNS records for floating IP
assignments, bypassing Designate.

Ah, I did not realise there was work done on that. There was quite a bit
of work done this cycle to tie nova + neutron + designate together by
adding a "dns_name" to neutron ports - that is what we focused on.


The work that was done for nova/IPA integration:
* is specific to IPA - it uses IPA-specific APIs, files, commands, etc.
* does a lot more than just DNS registration - it configures the system 
to allow SSH into the system, to allow Kerberos auth, HBAC including 
hostgroup-based rules, etc. - this is the demo I did for OpenStack Tokyo: 
http://richmegginson.livejournal.com/27573.html


Rob Crittenden, Juan Osorio Robles, and Adam Young have helped with this 
effort and have extended it since then.


It unfortunately relies on unsupported internal nova apis (hooks), and 
there will be a discussion in Austin about how to do this going forward.







2. Juan Antonio Osorio has been working on TLS everywhere.  The issue
thus far has been certificate management.  This provides a Dogtag server
for certs.

3. Rob Crittenden has been working on auto-registration of virtual
machines with an identity provider upon launch.  This gives that effort
an IdM to use.

4. Keystone can make use of the Identity store for administrative users
in their own domain.

5. Many of the compliance audits have complained about cleartext
passwords in config files. This removes most of them.  MySQL supports
X509 based authentication today, and there is Kerberos support in the
works, which should remove the last remaining cleartext Passwords.

I mentioned centralized SUDO and HBAC.  These are both tools that may be
used by administrators if so desired on the install. I would recommend
that they be used, but there is no requirement to do so.

Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-06 Thread Rich Megginson

On 04/06/2016 02:55 AM, Hayes, Graham wrote:

On 06/04/16 03:09, Adam Young wrote:

On 04/05/2016 08:02 AM, Hayes, Graham wrote:

On 02/04/2016 22:33, Adam Young wrote:

I finally have enough understanding of what is going on with Tripleo to
reasonably discuss how to implement solutions for some of the main
security needs of a deployment.


FreeIPA is an identity management solution that can provide support for:

1. TLS on all network communications:
   A. HTTPS for web services
   B. TLS for the message bus
   C. TLS for communication with the database
2. Identity for all actors in the system:
   A. API services
   B. Message producers and consumers
   C. Database consumers
   D. Keystone service users
3. Secure DNS (DNSSEC)
4. Federation support
5. SSH access control to hosts for both undercloud and overcloud
6. SUDO management
7. Single sign-on for applications running in the overcloud


The main pieces of FreeIPA are
1. LDAP (the 389 Directory Server)
2. Kerberos
3. DNS (BIND)
4. Certificate Authority (CA) server (Dogtag)
5. WebUI/Web Service Management Interface (HTTPD)





There are a couple ongoing efforts that will tie in with this:

1. Designate should be able to use the DNS from FreeIPA.  That was the
original implementation.

Designate cannot use FreeIPA - we haven't had a driver for it since
Kilo.

There have been various efforts since then to support FreeIPA, but
FreeIPA requires that it be the sole source of truth for DNS
information, as does Designate.

If FreeIPA supported the traditional Notify and Zone Transfer mechanisms
then we would be fine, but unfortunately it does not.

[1] Actually points out that the goal of FreeIPA's DNS integration
"... is NOT to provide general-purpose DNS server. Features beyond
easing FreeIPA deployment and maintenance are explicitly out of scope."

1 - http://www.freeipa.org/page/DNS#Goals


Let's table that for now. No reason they should not be able to
interoperate somehow.

Without work being done by FreeIPA (to enable the XFR interface on the
bind server) or us (Designate) re-designing our DNS Driver interface
they will not be able to inter-operate.


It's going to be very difficult for FreeIPA to support XFR for the 
"main" zone (i.e. the zone in which records are actively 
updated/maintained in LDAP and kept in sync globally).  It might be 
possible to make it work for a child/sub zone that LDAP doesn't have to 
pay much attention to, and let that zone be updated by Designate via 
XFR.  I suppose Designate has the same problem with AD DNS integration?  
If you want to discuss this more, we can take the discussion to 
freeipa-us...@redhat.com


The ipa/nova join functionality allows new VM hosts to be automatically 
registered with IPA, including the DNS records for floating IP 
assignments, bypassing Designate.








2. Juan Antonio Osorio has been working on TLS everywhere.  The issue
thus far has been certificate management.  This provides a Dogtag server
for certs.

3. Rob Crittenden has been working on auto-registration of virtual
machines with an identity provider upon launch.  This gives that effort
an IdM to use.

4. Keystone can make use of the Identity store for administrative users
in their own domain.

5. Many of the compliance audits have complained about cleartext
passwords in config files. This removes most of them.  MySQL supports
X509 based authentication today, and there is Kerberos support in the
works, which should remove the last remaining cleartext Passwords.

I mentioned Centralized SUDO and HBAC.  These are both tools that may be
used by administrators if so desired on the install. I would recommend
that they be used, but there is no requirement to do so.








Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Rich Megginson

On 04/05/2016 07:06 PM, Dan Prince wrote:

On Sat, 2016-04-02 at 17:28 -0400, Adam Young wrote:

I finally have enough understanding of what is going on with TripleO
to reasonably discuss how to implement solutions for some of the main
security needs of a deployment.


FreeIPA is an identity management solution that can provide support for:

1. TLS on all network communications:
   A. HTTPS for web services
   B. TLS for the message bus
   C. TLS for communication with the database
2. Identity for all actors in the system:
   A. API services
   B. Message producers and consumers
   C. Database consumers
   D. Keystone service users
3. Secure DNS (DNSSEC)
4. Federation support
5. SSH access control to hosts for both undercloud and overcloud
6. SUDO management
7. Single sign-on for applications running in the overcloud


The main pieces of FreeIPA are:
1. LDAP (the 389 Directory Server)
2. Kerberos
3. DNS (BIND)
4. Certificate Authority (CA) server (Dogtag)
5. WebUI/Web Service Management Interface (HTTPD)

Of these, the CA is the most critical.  Without a centralized CA, we
have no reasonable way to do certificate management.

Would using Barbican to provide an API to manage the certificates make
more sense for our deployment tooling? This could be useful for both
undercloud and overcloud cases.

As for the rest of this, how invasive is the implementation of
FreeIPA? Is this something that we can layer on top of an existing
deployment, so that users wishing to use FreeIPA can opt in?


Now, I know a lot of people have an allergic reaction to some, maybe
all, of these technologies. They should not be required to be running
in a development or testbed setup.  But we need to make it possible to
secure an end deployment, and FreeIPA was designed explicitly for these
kinds of distributed applications.  Here is what I would like to
implement.

Assuming that the undercloud is installed on a physical machine, we
want to treat the FreeIPA server as a managed service of the undercloud
that is then consumed by the rest of the overcloud. Right now, there are
conflicts for some ports (8080 is used by both Swift and Dogtag) that
prevent a drop-in run of the server on the undercloud controller.  Even
if we could deconflict, there is a possible battle between Keystone and
the FreeIPA server on the undercloud.  So, while I would like to see the
ability to run the FreeIPA server on the undercloud machine eventually,
I think a more realistic deployment is to build a separate virtual
machine, parallel to the overcloud controller, and install FreeIPA
there. I've been able to modify TripleO Quickstart to provision this
VM.

I was also able to run FreeIPA in a container on the undercloud
machine, but this is, I think, not how we want to migrate to a
container-based strategy. It should be more deliberate.


While the ideal setup would be to install the IPA layer first and
create service users there, this produces a different install path
between with-FreeIPA and without-FreeIPA. Thus, I suspect the right
approach is to run the overcloud deploy, then "harden" the deployment
with the FreeIPA steps.


The IdM team did just this last summer in preparation for the Tokyo
summit, using Ansible and Packstack.  The Rippowam project
(https://github.com/admiyo/rippowam) was able to fully lock down a
Packstack-based install.  I'd like to reuse as much of Rippowam as
possible, but called from Heat templates as part of an overcloud
deploy.  I do not really want to reimplement Rippowam in Puppet.

As we are using Puppet for our configuration, I think this is currently
a requirement. There are many good Puppet examples out there for various
servers, and a quick Google search shows that some IPA modules are
available as well.

I think most TripleO users are quite happy using Puppet modules for
configuration, in that the Puppet OpenStack modules are quite mature and
well tested. Making a one-off exception for FreeIPA at this point
doesn't make sense to me.


What about calling an ansible playbook from a puppet module?


So, big question: is Heat->Ansible (instead of Puppet) an acceptable
path for an overcloud deployment?  We are talking Ansible 1.0 playbooks,
which should be relatively straightforward to port to 2.0 when the time
comes.

Thus, the sequence would be:

1. Run the existing overcloud deploy steps.
2. Install the IPA server on the allocated VM.
3. Register the compute nodes and the controller as IPA clients.
4. Convert service users over to LDAP-backed services, complete with
the necessary Kerberos steps to do passwordless authentication.
5. Register all web services with IPA and allocate X509 certificates
for HTTPS.
6. Set up host-based access control (HBAC) rules for SSH access to
overcloud machines.


When we did the Rippowam demo, we used the Proton driver and Kerberos
for securing the message broker.  Since Rabbit seems to be the tool of
choice, we would use X509 authentication and TLS for encryption.  ACLs,
for now, would stay in the 

Re: [openstack-dev] [puppet-keystone] Setting additional config options:

2016-03-29 Thread Rich Megginson

On 03/29/2016 04:19 PM, Adam Young wrote:


Somewhere in here:

http://git.openstack.org/cgit/openstack/puppet-keystone/tree/spec/classes/keystone_spec.rb 



spec is for the rspec unit tests.  Do you mean
http://git.openstack.org/cgit/openstack/puppet-keystone/tree/manifests/init.pp
?




I need to set these options:


admin_project_name
admin_project_domain_name

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n450 

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n453 




If they are unset, we should default them to 'admin' and 'Default' on new
installs, and leave them blank on old installs.


Can anyone point me in the right direction?
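For what it's worth, a minimal sketch of wiring these through the module's keystone_config resource could look like the following. The `resource/` section prefix and the default values are assumptions here - check the linked config.py for the actual option group before relying on this:

```puppet
# Hypothetical sketch - section name ([resource]) and defaults are
# assumptions; verify against keystone/common/config.py.
keystone_config {
  'resource/admin_project_name':        value => 'admin';
  'resource/admin_project_domain_name': value => 'Default';
}
```

The new-install vs. old-install behavior described above would then be a conditional in init.pp choosing between these values and absent.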




Re: [openstack-dev] [nova] nova hooks - document & test or deprecate?

2016-02-29 Thread Rich Megginson

On 02/29/2016 12:19 PM, Chris Friesen wrote:

On 02/29/2016 12:22 PM, Daniel P. Berrange wrote:


There are three core scenarios for hooks:

  1. Modifying some aspect of the Nova operation
  2. Triggering an external action synchronously with some Nova operation
  3. Triggering an external action asynchronously with some Nova operation

The RDO example falls in scenario 1, since it is modifying the
injected files. I think this is absolutely the kind of thing
we should explicitly *never* support. When external code can arbitrarily
modify some aspect of Nova operation, we're in totally uncharted
territory as to the behaviour of Nova. To support that we'd need to
provide a stable internal API, which is just not something we want to
tie ourselves into. I don't know just what the RDO example is trying
to achieve, but whatever it is, it should be via some supportable API
and not a hook.

Scenarios 2 and 3 are both valid to consider. Using the notifications
system gets us an asynchronous trigger mechanism, which is probably
fine for many scenarios.  The big question is whether there's a
compelling need for scenario 2, where the external action blocks
execution of the Nova operation until it has completed its hook.


Even in the case of scenario 2, it is possible in some cases to use a 
proxy to intercept the HTTP request, take action, and then forward or 
reject it as appropriate.


I think the real question is whether there's a need to trigger an 
external action synchronously from down in the guts of the nova code.


The hooks do the following: 
https://github.com/rcritten/rdo-vm-factory/blob/use-centos/rdo-ipa-nova/novahooks.py#L271


We need to generate a token (ipaotp) and call ipa host-add with that 
token _before_ the new machine has a chance to call ipa-client-install.  
We have to guarantee that the client cannot call ipa-client-install 
until we get back the response from IPA that the host has been added 
with the token.  Additionally, we store the token in an injected_file in 
the new machine, so the file can be removed as soon as possible.  We 
tried storing the token in the VM metadata, but there is apparently no 
way to delete it?  Can the machine do

curl -XDELETE http://169.254.169.254/openstack/latest/metadata?key=value ?

Using the build_instance.pre hook in Nova makes this simple and 
straightforward.  It's also relatively painless to use the 
network_info.post hook to handle the floating IP address assignment.
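For context, here is a minimal sketch of the shape of such a hook. The class name, the OTP generation, and the injected file path are illustrative, not the actual novahooks.py code linked above; the load-bearing detail is that the hook appends to the mutable injected_files list it receives (argv[7] in kilo):

```python
import base64
import os


class BuildInstanceHook(object):
    """Illustrative build_instance pre-hook (sketch, not the real
    novahooks.py): inject a one-time password file into a new
    instance before it boots."""

    def pre(self, *args, **kwargs):
        # In kilo, the injected_files list was positional argv[7];
        # appending to it adds files to the new instance.
        injected_files = args[7]
        otp = base64.b64encode(os.urandom(16)).decode("ascii")
        # ...the real hook calls `ipa host-add` with the OTP here and
        # waits for IPA's response before letting the build continue...
        injected_files.append(("/etc/ipa/otp", otp))


# Hooks like this were registered through a 'nova.hooks' setuptools
# entry point named 'build_instance'; nova invoked .pre() before
# building the instance.
```
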


Is it possible to do the above using notifications without jumping 
through too many hoops?




Chris



Re: [openstack-dev] [puppet] proposing Cody Herriges part of Puppet OpenStack core

2015-12-08 Thread Rich Megginson

On 12/08/2015 09:49 AM, Emilien Macchi wrote:

Hi,

Back in the "old days", Cody was already core on the modules, when they
were hosted in the Puppetlabs namespace.
His contributions [1] are very valuable to the group:
* strong knowledge of Puppet and all its dependencies in general
* very helpful in debugging issues related to Puppet core or
dependencies (beaker, etc.)
* regular attendance at our weekly meeting
* pertinent reviews
* deep understanding of our coding style

I would like to propose having him back as part of our core team.
As usual, we need to vote.


+1


Thanks,

[1]
http://stackalytics.openstack.org/?metric=commits&release=all&project_type=all&user_id=ody-cat




Re: [openstack-dev] [puppet] proposing Sofer Athlan Guyot part of puppet-keystone core team

2015-12-03 Thread Rich Megginson

On 12/03/2015 01:05 PM, Emilien Macchi wrote:

Hi,

For some months, the Puppet OpenStack group has been very lucky to have
Sofer working with us.
He has become a huge contributor to puppet-keystone: he knows the module
perfectly and has written an insane amount of code recently, bringing
new features that our community requested (some stats: [1]).
He's always here to help on IRC and present during our weekly meetings.

Core contributors, please vote if you agree to add him to the
puppet-keystone core team.


+1 - chem has been doing excellent work!



Thanks,

[1]
http://stackalytics.openstack.org/?user_id=sofer-athlan-guyot&release=all&metric=loc




Re: [openstack-dev] [nova] build_instance pre hook cannot set injected_files for new instance

2015-11-20 Thread Rich Megginson

On 11/19/2015 10:34 AM, Rich Megginson wrote:

I have some code that uses the build_instance pre hook to set
injected_files in the new instance.  With the kilo code, argv[7] was
passed as [] - so I could append/extend this value to add more
injected_files.  With the latest code, this is passed as None, so I
can't set it.  How can I pass injected_files in a build_instance pre
hook with the latest code/liberty?


I have filed bug https://bugs.launchpad.net/nova/+bug/1518321 to track 
this issue.


[openstack-dev] [nova] build_instance pre hook cannot set injected_files for new instance

2015-11-19 Thread Rich Megginson
I have some code that uses the build_instance pre hook to set
injected_files in the new instance.  With the kilo code, argv[7] was
passed as [] - so I could append/extend this value to add more
injected_files.  With the latest code, this is passed as None, so I
can't set it.  How can I pass injected_files in a build_instance pre
hook with the latest code/liberty?
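The underlying issue is plain Python argument semantics, shown here with a toy stand-in (hypothetical function, not nova code): a hook can extend a list the caller passed in, but once the caller passes None, rebinding it inside the hook never reaches the caller.

```python
def pre_hook(injected_files):
    """Toy stand-in for a build_instance pre hook (not nova code)."""
    if injected_files is None:
        # Rebinding the local name does NOT reach the caller -
        # this is why the hook silently stops working on liberty.
        injected_files = []
    injected_files.append(("/etc/example", "data"))
    return injected_files


# Kilo-style call: caller passes [], so the hook's append is visible.
kilo_files = []
pre_hook(kilo_files)
assert kilo_files == [("/etc/example", "data")]

# Liberty-style call: caller passes None, so the caller sees nothing.
liberty_files = None
pre_hook(liberty_files)
assert liberty_files is None
```
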


Re: [openstack-dev] [puppet] Creating puppet-keystone-core and proposing Richard Megginson core-reviewer

2015-11-02 Thread Rich Megginson

On 10/31/2015 08:55 AM, Emilien Macchi wrote:

At the Summit we discussed scaling up our team.
We decided to investigate the creation of module-specific sub-groups 
that would have +2 power.


I would like to start with puppet-keystone:
https://review.openstack.org/240666

And propose Richard Megginson part of this group.

Rich has been leading the puppet-keystone work since our Juno cycle. 
Without his leadership and skills, I'm not sure we would have Keystone 
v3 support in our modules.
He's a good Puppet reviewer and takes care of backward compatibility. 
He also has strong knowledge of how Keystone works, and he's always 
willing to lead our roadmap regarding identity deployment in OpenStack.

Having him on board is an awesome opportunity for us to be ahead of 
other deployment tools and to support many features in Keystone that 
real deployments actually need.


I would like to propose him part of the new puppet-keystone-core group.

Thank you Rich for your work, which is very appreciated.


Thanks Emilien, and everyone else for your support!


--
Emilien Macchi






Re: [openstack-dev] [puppet][Fuel] Using Native Ruby Client for Openstack Providers

2015-10-23 Thread Rich Megginson

On 10/22/2015 11:09 PM, Gilles Dubreuil wrote:


On 21/10/15 00:56, Sofer Athlan-Guyot wrote:

Gilles Dubreuil  writes:


On 14/10/15 17:15, Gilles Dubreuil wrote:


On 14/10/15 10:36, Colleen Murphy wrote:


On Tue, Oct 13, 2015 at 6:13 AM, Vladimir Kuklin wrote:

 Puppetmaster and Fuelers,

 Last week I mentioned that I would like to bring the theme of using
 native ruby OpenStack client and use it within the providers.

 Emilien told me that I was already too late and that the decision had
 been made that puppet-openstack would not work with Aviator, based on
 [0]. I went through this thread and did not find any unresolvable
 issues with using Aviator compared with the potential benefits it
 could have brought.

 What I saw was actually this:

 * Pros

 1) It is a native ruby client
 2) We can import it in puppet and use all the power of Ruby
 3) We will not need to have a lot of forks/execs for puppet
 4) You are relying on negotiated and structured output provided by
 API (JSON) instead of introducing workarounds for client output like [1]

 * Cons

 1) Aviator is not actively supported
 2) Aviator does not track all the upstream OpenStack features while
 native OpenStack client does support them
 3) Ruby community is not really interested in OpenStack (this one is
 arguable, I think)

 * Proposed solution

 While I completely understand all the cons against using Aviator
 right now, I see that Pros above are essential enough to change our
 mind and invest our own resources into creating really good
 OpenStack binding in Ruby.
 Some are saying that there is not much Ruby involvement in
 OpenStack. But we are actually working with Puppet/Ruby and are
 involved in the community. So why shouldn't we own this ourselves
 and lead by example here?

 I understand that many of you do already have a lot of things on
 their plate and cannot or would not want to support things like
 additional library when native OpenStack client is working
 reasonably well for you. But what if I propose the following scheme to
 get support of native Ruby client for OpenStack:

 1) we (community) have these resources (speaking of the company I am
 working for, we at Mirantis have a set of guys who could be very
 interested in working on Ruby client for OpenStack)
 2) we gradually improve Aviator code base up to the level that it
 eliminates issues that are mentioned in  'Cons' section
 3) we introduce additional set of providers and allow users and
 operators to pick whichever they want
 4) we leave OpenStackClient default one

 Would you support it and allow such code to be merged into upstream
 puppet-openstack modules?


 [0] 
https://groups.google.com/a/puppetlabs.com/forum/#!searchin/puppet-openstack/aviator$20openstackclient/puppet-openstack/GJwDHNAFVYw/ayN4cdg3EW0J
 [1] 
https://github.com/openstack/puppet-swift/blob/master/lib/puppet/provider/swift_ring_builder.rb#L21-L86
 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04 
 +7 (926) 702-39-68 
 Skype kuklinvv
 35bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com 
 www.mirantis.ru 
 vkuk...@mirantis.com 


The scale-tipping reason we went with python-openstackclient over the
Aviator library was that at the same time we were trying to switch, we
were also developing keystone v3 features and we could only get those
features from python-openstackclient.

For the first two pros you listed, I'm not convinced they're really
pros. Puppet types and providers are actually extremely well-suited to
shelling out to command-line clients. There are existing, documented
puppet APIs to do it and we get automatic debug output with it. Nearly
every existing type and provider does this. It is not well-suited to
call out to other non-standard ruby libraries because they need to be
added as a dependency somehow, and doing this is not well-established in
puppet. There are basically two options to do this:

  1) Add a gem as a package resource and make sure the package resource
is called before any of the openstack resources. I could see this
working as an opt-in thing, but not as a default, for the same reason we
don't require our users to install pip libraries - there are fewer
security guarantees from PyPI and RubyGems than from distro packages,
plus corporate infrastructure may not allow pulling packages from these
types of sources. (I don't see this policy documented anywhere, this was
just something that's been instilled in me since I started working on
this team. If we 

Re: [openstack-dev] [puppet][Fuel] Using Native Ruby Client for Openstack Providers

2015-10-13 Thread Rich Megginson

On 10/13/2015 07:13 AM, Vladimir Kuklin wrote:

Puppetmaster and Fuelers,

Last week I mentioned that I would like to bring the theme of using 
native ruby OpenStack client and use it within the providers.


Emilien told me that I was already too late and that the decision had 
been made that puppet-openstack would not work with Aviator, based on 
[0]. I went through this thread and did not find any unresolvable 
issues with using Aviator compared with the potential benefits it 
could have brought.


What I saw was actually this:

* Pros

1) It is a native ruby client
2) We can import it in puppet and use all the power of Ruby
3) We will not need to have a lot of forks/execs for puppet


I think 1), 2), and 3) go together - that is, the reason why 1) and 2) 
are good is because of 3) - since aviator is native ruby, there is no 
need to fork/exec.  What other "power of Ruby" is there to be taken 
advantage of?


As for fork/exec, it remains to be seen whether fork/exec is causing a 
performance problem.  Note that you can also run the openstackclient in 
"persistent" mode - that is, use it as a persistent pipe, which will 
read commands from stdin and output to stdout, which should alleviate 
much if not all of any performance problem caused by multiple 
fork/exec.  We just haven't investigated this route yet because it needs 
to be proven that fork/exec causes performance problems.
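The "persistent pipe" idea can be sketched like this. This is only an illustration of the technique, not puppet-openstacklib code: a trivial Ruby echo loop stands in for openstackclient's interactive mode, and the commands sent are placeholders.

```ruby
# Sketch of "persistent" mode: spawn one long-lived child and stream
# commands over its stdin instead of paying a fork/exec per command.
# A trivial Ruby echo child stands in for `openstack` interactive mode.
child = IO.popen(['ruby', '-ne', '$stdout.puts $_.strip.upcase; $stdout.flush'], 'r+')

child.puts 'user list'     # would be an openstack subcommand
reply = child.gets.strip   # one round trip, no new process spawned

child.puts 'project list'  # subsequent commands reuse the same child
reply2 = child.gets.strip

child.close                # child exits when its stdin closes
```

The point of the sketch is only that each command costs one pipe round trip instead of one fork/exec, which is what would need to be measured against the current per-command exec.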


4) You are relying on negotiated and structured output provided by API 
(JSON) instead of introducing workarounds for client output like [1]


openstackclient can output JSON.
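For example, a provider could request JSON output (`openstack user show demo -f json`) and parse the structured result instead of scraping columnar output. The sample string below stands in for the command's stdout; it is illustrative, not captured from a real deployment.

```ruby
require 'json'

# Hypothetical sketch: parse openstackclient's `-f json` output rather
# than scraping human-readable columns. The string stands in for the
# command's stdout.
sample_stdout = '{"name": "demo", "id": "b1c2d3", "domain_id": "default"}'

user = JSON.parse(sample_stdout)
puts user['name']       # plain field access, no regex workarounds
puts user['domain_id']
```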



* Cons

1) Aviator is not actively supported


This is huge.

2) Aviator does not track all the upstream OpenStack features while 
native OpenStack client does support them


This is also huge.

3) Ruby community is not really interested in OpenStack (this one is 
arguable, I think)


* Proposed solution

While I completely understand all the cons against using Aviator right 
now, I see that Pros above are essential enough to change our mind and 
invest our own resources into creating really good OpenStack binding 
in Ruby.


I'm still not convinced.

Some are saying that there is not much Ruby involvement in 
OpenStack. But we are actually working with Puppet/Ruby and are 
involved in the community. So why shouldn't we own this ourselves 
and lead by example here?






I understand that many of you do already have a lot of things on their 
plate and cannot or would not want to support things like additional 
library when native OpenStack client is working reasonably well for 
you. But what if I propose the following scheme to get support of a native 
Ruby client for OpenStack:


1) we (community) have these resources (speaking of the company I am 
working for, we at Mirantis have a set of guys who could be very 
interested in working on Ruby client for OpenStack)
2) we gradually improve Aviator code base up to the level that it 
eliminates issues that are mentioned in  'Cons' section
3) we introduce additional set of providers and allow users and 
operators to pick whichever they want

4) we leave OpenStackClient default one

Would you support it and allow such code to be merged into upstream 
puppet-openstack modules?


I would be in favor of such a plan if 
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20151013 questions 
0.4.1-0.4.7 could be answered in the affirmative.





[0] 
https://groups.google.com/a/puppetlabs.com/forum/#!searchin/puppet-openstack/aviator$20openstackclient/puppet-openstack/GJwDHNAFVYw/ayN4cdg3EW0J 

[1] 
https://github.com/openstack/puppet-swift/blob/master/lib/puppet/provider/swift_ring_builder.rb#L21-L86

--
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru 
vkuk...@mirantis.com 






Re: [openstack-dev] [puppet][Fuel] Using Native Ruby Client for Openstack Providers

2015-10-13 Thread Rich Megginson

On 10/13/2015 09:17 AM, Vladimir Kuklin wrote:

Rich

Thanks for your feedback - let me comment on a couple of things.

First of all, I do not think we have complete support for every action 
in the OpenStack client right now - we still need to rely on 
neutronclient, glanceclient, etc.


Right.  But those are all being actively maintained, and will have to 
add support for Keystone v3 in order to take advantage of Keystone v3 
features if desired by the clients of those services.




Secondly, regarding Ruby power - this is about any good programming 
language, not about Ruby - I can simply mention better exception 
handling as you would not need to parse the output and generate your 
own exceptions - this makes it easier to support the whole set of 
providers. As I mentioned earlier, we do not have perfect exception 
handling for intermittent operational issues.


"As I mentioned earlier" - not sure to what you are referring here. Can 
you please explain how you could do exception handling better with 
native ruby than with openstackclient output?  I mean, you still have to 
"parse" the return value of the http request, to get the code, the error 
message, and any returned values.
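Either way, the provider ends up mapping a status and an error payload onto typed errors. A minimal sketch, assuming hypothetical exception classes and matching rules that are not taken from puppet-openstacklib or any real provider:

```ruby
# Hypothetical sketch of typed exception handling around client output.
# The exception classes and regexes are illustrative only.
class OpenstackUnauthorized < StandardError; end
class OpenstackNotFound     < StandardError; end

def check_result(exit_code, stderr)
  return if exit_code.zero?
  case stderr
  when /401|Unauthorized/          then raise OpenstackUnauthorized, stderr
  when /404|No .* found|Not Found/ then raise OpenstackNotFound, stderr
  else raise StandardError, stderr
  end
end

check_result(0, '') # success path: no exception raised
```

The same mapping is needed whether `stderr` comes from a forked CLI or an HTTP response body, which is the point being debated above.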




Finally, I understand that you have not seen metrics. Although it 
seems to me absolutely obvious that fork/exec is going to be the problem 
here, that's OK - I will work on that and come up with a quantitative 
analysis.


It does appear obvious that getting rid of fork/exec will speed up 
puppet runs.  But it is not obvious how large that speed-up will be, 
what it will cost relative to the current code, or how its 
cost/performance compares with using openstackclient in "persistent" mode.





On Tue, Oct 13, 2015 at 5:18 PM, Rich Megginson <rmegg...@redhat.com> wrote:


On 10/13/2015 07:13 AM, Vladimir Kuklin wrote:

Puppetmaster and Fuelers,

Last week I mentioned that I would like to bring the theme of
using native ruby OpenStack client and use it within the providers.

Emilien told me that I was already too late and that the decision
had been made that puppet-openstack would not work with Aviator,
based on [0]. I went through this thread and did not find any
unresolvable issues with using Aviator compared with the
potential benefits it could have brought.

What I saw was actually this:

* Pros

1) It is a native ruby client
2) We can import it in puppet and use all the power of Ruby
3) We will not need to have a lot of forks/execs for puppet


I think 1), 2), and 3) go together - that is, the reason why 1)
and 2) are good is because of 3) - since aviator is native ruby,
there is no need to fork/exec. What other "power of Ruby" is there
to be taken advantage of?

As for fork/exec, it remains to be seen whether fork/exec is causing
a performance problem.  Note that you can also run the
openstackclient in "persistent" mode - that is, use it as a
persistent pipe, which will read commands from stdin and output to
stdout, which should alleviate much if not all of any performance
problem caused by multiple fork/exec.  We just haven't
investigated this route yet because it needs to be proven that
fork/exec causes performance problems.


4) You are relying on negotiated and structured output provided
by API (JSON) instead of introducing workarounds for client
output like [1]


openstackclient can output JSON.



* Cons

1) Aviator is not actively supported


This is huge.


2) Aviator does not track all the upstream OpenStack features
while native OpenStack client does support them


This is also huge.


3) Ruby community is not really interested in OpenStack (this one
is arguable, I think)

* Proposed solution

While I completely understand all the cons against using Aviator
right now, I see that Pros above are essential enough to change
our mind and invest our own resources into creating really good
OpenStack binding in Ruby.


I'm still not convinced.


Some are saying that there is not much Ruby involvement in
OpenStack. But we are actually working with Puppet/Ruby and are
involved in the community. So why shouldn't we own this
ourselves and lead by example here?






I understand that many of you do already have a lot of things on
their plate and cannot or would not want to support things like
additional library when native OpenStack client is working
reasonably well for you. But what if I propose the following scheme to
get support of native Ruby client for OpenStack:

1) we (community) have these resources (speaking of the company I
am working for, we at Mirantis have a set of guys who could be
very interested in working on Ruby client for OpenStack)
2) we gradually improve Aviator code base up to the le

Re: [openstack-dev] [puppet][Fuel] Using Native Ruby Client for Openstack Providers

2015-10-13 Thread Rich Megginson

On 10/13/2015 09:22 AM, Clayton O'Neill wrote:
I agree that ideally, using a native ruby library would be better, but 
I also share Matt's concern.  We'd need a commitment from more than 
one person to maintain the library if we went that route.


I think the big advantages I see with the ruby client would be:

  * Potentially better performance



But how much faster, is it worth the cost, and how does that compare with 
using openstackclient in "persistent" mode?



  * Faster turn around time for enhancements/bug fixes.  My concern
here is that we're currently limited by how quickly distros pick
up new versions of OpenStack Client.



IMO this is the biggest problem we have had with using openstackclient - 
being gated by distros, and having to wait months, in some cases, to use 
features supported by the services, which we could have used immediately 
using the API directly.




I think if we did end up using a ruby library, we'd also want to make 
sure it was not only vendored, but also usable independently, to 
increase the audience.


. . . and then are we also going to be gated by the distros in the same 
way, waiting for months to get an update to aviator?




On Tue, Oct 13, 2015 at 8:16 AM, Vladimir Kuklin wrote:


Matt

Thanks for your input. So, I mentioned the following - Fuel folks
can contribute to a Ruby client for OpenStack, as we are also
interested in making it faster. That's why I asked for support in
case we invest substantial effort (as we do not want to waste our
time on things that will not land upstream) and asked whether the
approach I proposed is OK with you.

On Tue, Oct 13, 2015 at 6:07 PM, Matt Fischer wrote:

From a technical point of view, not forking and using a native
library makes total sense. I think it would likely be faster
and certainly cleaner than parsing output. Unfortunately I
don't think that we have the resources to actively maintain
the library. I think that's the main blocker for me.

On Tue, Oct 13, 2015 at 7:13 AM, Vladimir Kuklin wrote:

Puppetmaster and Fuelers,

Last week I mentioned that I would like to bring the theme
of using native ruby OpenStack client and use it within
the providers.

Emilien told me that I was already too late and the
decision had been made that puppet-openstack would not
work with Aviator, based on [0]. I went through this thread
and did not find any unresolvable issues with using
Aviator compared with the potential benefits it could
have brought.

What I saw was actually this:

* Pros

1) It is a native ruby client
2) We can import it in puppet and use all the power of Ruby
3) We will not need to have a lot of forks/execs for puppet
4) You are relying on negotiated and structured output
provided by API (JSON) instead of introducing workarounds
for client output like [1]

* Cons

1) Aviator is not actively supported
2) Aviator does not track all the upstream OpenStack
features while native OpenStack client does support them
3) Ruby community is not really interested in OpenStack
(this one is arguable, I think)

* Proposed solution

While I completely understand all the cons against using
Aviator right now, I see that Pros above are essential
enough to change our mind and invest our own resources
into creating really good OpenStack binding in Ruby.
Some are saying that there is not much Ruby involvement
in OpenStack. But we are actually working with
Puppet/Ruby and are involved in the community. So why
shouldn't we own this ourselves and lead by example here?

I understand that many of you do already have a lot of
things on their plate and cannot or would not want to
support things like additional library when native
OpenStack client is working reasonably well for you. But
what if I propose the following scheme to get support of a native
Ruby client for OpenStack:

1) we (community) have these resources (speaking of the
company I am working for, we at Mirantis have a set of
guys who could be very interested in working on Ruby
client for OpenStack)
2) we gradually improve Aviator code base up to the level
that it eliminates issues that are mentioned in  'Cons'
section
3) we introduce additional set of providers 

Re: [openstack-dev] [puppet][Fuel] OpenstackLib Client Provider Better Exception Handling

2015-10-13 Thread Rich Megginson

On 10/13/2015 12:57 PM, Emilien Macchi wrote:


On 10/08/2015 07:38 AM, Vladimir Kuklin wrote:
[...]

* Proposed solution

Introduce a library of exception handling methods which should be the
same for all puppet openstack providers as these exceptions seem to be
generic. Then, for each of the providers we can introduce
provider-specific libraries that will inherit from this one.

Our mos-puppet team could add this into their backlog and could work on
that in upstream or downstream and propose it upstream.

What do you think on that, puppet folks?

This is excellent feedback on how the modules work in Fuel, and I'm sure
you're not alone; everybody deploying OpenStack with Puppet is hitting
these issues.

You might want to refactor [1] and manage more use-cases.
If you plan to work on it, I would suggest to use our upstream backlog
[2] so we can involve the whole group in that work.

[1]
https://github.com/openstack/puppet-openstacklib/blob/master/lib/puppet/provider/openstack.rb
[2] https://trello.com/b/4X3zxWRZ/on-going-effort


If the issue is that openstackclient output is hard to parse, we should 
tell openstackclient to output JSON - 
https://bugs.launchpad.net/puppet-openstacklib/+bug/1479387




Thanks for taking care of that,






Re: [openstack-dev] [puppet][Fuel] Using Native Ruby Client for Openstack Providers

2015-10-13 Thread Rich Megginson

On 10/13/2015 01:49 PM, Dean Troyer wrote:
On Tue, Oct 13, 2015 at 10:33 AM, Rich Megginson <rmegg...@redhat.com> wrote:



I think if we did end up using a ruby library, we'd also want to
make sure it was not only vendored, but also usable
independently, to increase the audience.


. . . and then are we also going to be gated by the distros in the
same way, waiting for months to get an update to aviator?


Yes.  Check out the recent thread (again!) about urllib3 being vendored 
inside Python requests.  Distros will unvendor that for you and you 
will again have the same problem in a different place.


Rich, the daemon/persistent mode has been low on my radar for a bit 
and I think needs a bit of work to be fully usable the way I think you 
envision.  Let us know if that becomes a higher priority as we're 
currently focusing on fleshing out the in-repo API support and 
modernizing our auth (switching to keystoneauth1).


Ok.  It's unclear right now when/if this will be needed by puppet.



dt

--

Dean Troyer
dtro...@gmail.com






Re: [openstack-dev] [puppet] WARNING - breaking backwards compatibility in puppet-keystone

2015-10-07 Thread Rich Megginson

On 10/07/2015 03:54 PM, Matt Fischer wrote:


I thought the agreement was that default would be assumed so that we 
didn't break backwards compatibility?




puppet-heat had already started using domains, and had already written 
their code based on the implementation where an unqualified name was 
allowed if it was unique among all domains.  That code will need to 
change to specify the domain.  Any other code that was already using 
domains (which I'm assuming is hardly any, if at all) will also need to 
change.



On Oct 7, 2015 10:35 AM, "Rich Megginson" <rmegg...@redhat.com> wrote:


tl;dr You must specify a domain when using domain scoped resources.

If you are using domains with puppet-keystone, there is a proposed
patch that will break backwards compatibility.

https://review.openstack.org/#/c/226624/ Replace indirection calls

"Indirection calls are replaced with #fetch_project and
#fetch_user methods
using python-openstackclient (OSC).

Also removes the assumption that if a resource is unique within a
domain space
then the domain doesn't have to be specified."

It is the last part which is causing backwards compatibility to be
broken.  This patch requires that a domain scoped resource _must_
be qualified with the domain name if _not_ in the 'Default'
domain.  Previously, you did not have to qualify a resource name
with the domain if the name was unique in _all_ domains.  The
problem was this code relied heavily on puppet indirection, and
was complex and difficult to maintain.  We removed it in favor of
a very simple implementation: if the name is not qualified with a
domain, it must be in the 'Default' domain.

Here is an example from puppet-heat - the 'heat_admin' user has
been created in the 'heat_stack' domain previously.

ensure_resource('keystone_user_role', 'heat_admin@::heat_stack', {
  'roles' => ['admin'],
})

This means "assign the user 'heat_admin' in the unspecified domain
to have the domain scoped role 'admin' in the 'heat_stack'
domain". It is a domain scoped role, not a project scoped role,
because in "@::heat_stack" there is no project, only a domain.
Note that the domain for the 'heat_admin' user is unspecified. In
order to specify the domain you must use
'heat_admin::heat_stack@::heat_stack'. This is the recommended fix
- to fully qualify the user + domain.

The breakage manifests itself like this, from the logs::

2015-10-02 06:07:39.574 | Debug: Executing '/usr/bin/openstack
user show --format shell heat_admin --domain Default'
2015-10-02 06:07:40.505 | Error:
/Stage[main]/Heat::Keystone::Domain/Keystone_user_role[heat_admin@::heat]:
Could not evaluate: No user heat_admin with domain  found

This is from the keystone_user_role code. Since the role user was
specified as 'heat_admin' with no domain, the keystone_user_role
code looks for 'heat_admin' in the 'Default' domain and can't find
it, and raises an error.

Right now, the only way to specify the domain is by adding
'::domain_name' to the user name, as
'heat_admin::heat_stack@::heat_stack'.  Sofer is working on a way
to add the domain name as a parameter of keystone_user_role -
https://review.openstack.org/226919 - so in the near future you
will be able to specify the resource like this:


ensure_resource('keystone_user_role', 'heat_admin@::heat_stack', {
  'roles' => ['admin'],
  'user_domain_name' => 'heat_stack',
})









[openstack-dev] [puppet] WARNING - breaking backwards compatibility in puppet-keystone

2015-10-07 Thread Rich Megginson

tl;dr You must specify a domain when using domain scoped resources.

If you are using domains with puppet-keystone, there is a proposed patch 
that will break backwards compatibility.


https://review.openstack.org/#/c/226624/ Replace indirection calls

"Indirection calls are replaced with #fetch_project and #fetch_user methods
using python-openstackclient (OSC).

Also removes the assumption that if a resource is unique within a domain 
space then the domain doesn't have to be specified."

It is the last part which is causing backwards compatibility to be 
broken.  This patch requires that a domain scoped resource _must_ be 
qualified with the domain name if _not_ in the 'Default' domain.  
Previously, you did not have to qualify a resource name with the domain 
if the name was unique in _all_ domains.  The problem was this code 
relied heavily on puppet indirection, and was complex and difficult to 
maintain.  We removed it in favor of a very simple implementation: if 
the name is not qualified with a domain, it must be in the 'Default' domain.
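The new rule can be sketched as a tiny title parser. This is a hypothetical illustration of the rule, not the actual puppet-keystone implementation:

```ruby
# Hypothetical sketch of the naming rule: an unqualified name is assumed
# to live in the 'Default' domain; 'name::domain' selects the domain
# explicitly.
def split_domain(title)
  name, domain = title.split('::', 2)
  [name, domain || 'Default']
end

p split_domain('heat_admin')              # unqualified -> 'Default' domain
p split_domain('heat_admin::heat_stack')  # explicit domain
```

Under this rule there is no cross-domain search at all, which is what makes the implementation simple compared with the indirection-based lookup it replaces.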


Here is an example from puppet-heat - the 'heat_admin' user has been 
created in the 'heat_stack' domain previously.


ensure_resource('keystone_user_role', 'heat_admin@::heat_stack', {
  'roles' => ['admin'],
})

This means "assign the user 'heat_admin' in the unspecified domain to 
have the domain scoped role 'admin' in the 'heat_stack' domain". It is a 
domain scoped role, not a project scoped role, because in 
"@::heat_stack" there is no project, only a domain. Note that the domain 
for the 'heat_admin' user is unspecified. In order to specify the domain 
you must use 'heat_admin::heat_stack@::heat_stack'. This is the 
recommended fix - to fully qualify the user + domain.


The breakage manifests itself like this, from the logs::

2015-10-02 06:07:39.574 | Debug: Executing '/usr/bin/openstack user 
show --format shell heat_admin --domain Default'
2015-10-02 06:07:40.505 | Error: 
/Stage[main]/Heat::Keystone::Domain/Keystone_user_role[heat_admin@::heat]: 
Could not evaluate: No user heat_admin with domain  found


This is from the keystone_user_role code. Since the role user was 
specified as 'heat_admin' with no domain, the keystone_user_role code 
looks for 'heat_admin' in the 'Default' domain and can't find it, and 
raises an error.


Right now, the only way to specify the domain is by adding 
'::domain_name' to the user name, as 
'heat_admin::heat_stack@::heat_stack'.  Sofer is working on a way to add 
the domain name as a parameter of keystone_user_role - 
https://review.openstack.org/226919 - so in the near future you will be 
able to specify the resource like this:



ensure_resource('keystone_user_role', 'heat_admin@::heat_stack', {
  'roles' => ['admin'],
  'user_domain_name' => 'heat_stack',
})



Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-10-07 Thread Rich Megginson

On 10/07/2015 09:08 AM, Sofer Athlan-Guyot wrote:

Rich Megginson <rmegg...@redhat.com> writes:


On 10/06/2015 02:36 PM, Sofer Athlan-Guyot wrote:

Rich Megginson <rmegg...@redhat.com> writes:


On 09/30/2015 11:43 AM, Sofer Athlan-Guyot wrote:

Gilles Dubreuil <gil...@redhat.com> writes:


On 30/09/15 03:43, Rich Megginson wrote:

On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:

On 15/09/15 19:55, Sofer Athlan-Guyot wrote:

Gilles Dubreuil <gil...@redhat.com> writes:


On 15/09/15 06:53, Rich Megginson wrote:

On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:

Hi,

Gilles Dubreuil <gil...@redhat.com> writes:


A. The 'composite namevar' approach:

   keystone_tenant { 'projectX::domainY': ... }

B. The 'meaningless name' approach:

   keystone_tenant { 'myproject': name => 'projectX', domain => 'domainY', ... }

Notes:
  - Actually using both combined should work too, with the domain
parameter supposedly overriding the domain part of the name.
  - Please look at [1] for some background on the two approaches:

The question
-
Decide between the two approaches, the one we would like to
retain for
puppet-keystone.

Why it matters?
---
1. Domain names are mandatory for every user, group or project - aside
from the backward compatibility period mentioned earlier, where no domain
means using the default one.
2. Long term impact
3. The two approaches are not completely equivalent, with different
consequences for future usage.

I can't see why they couldn't be equivalent, but I may be missing
something here.

I think we could support both.  I don't see it as an either/or
situation.


4. Being consistent
5. Therefore it is for the community to decide

Pros/Cons
--
A.

I think it's the B: meaningless approach here.


   Pros
 - Easier names

That's subjective; creating unique and meaningful names doesn't look
easy to me.

The point is that this allows choice - maybe the user already has some
naming scheme, or wants to use a more "natural" meaningful name -
rather
than being forced into a possibly "awkward" naming scheme with "::"

  keystone_user { 'heat domain admin user':
name => 'admin',
domain => 'HeatDomain',
...
  }

  keystone_user_role {'heat domain admin user@::HeatDomain':
roles => ['admin']
...
  }


   Cons
 - Titles have no meaning!

They have meaning to the user, not necessarily to Puppet.


 - Cases where 2 or more resources could exist

This seems to be the hardest part - I still cannot figure out how
to use
"compound" names with Puppet.


 - More difficult to debug

More difficult than it is already? :P


 - Titles mismatch when listing the resources (self.instances)

B.
   Pros
 - Unique titles guaranteed
 - No ambiguity between resource found and their title
   Cons
 - More complicated titles
My vote

I would love to have the approach A for easier name.
But I've seen the challenge of maintaining the providers behind the
curtains and the confusion it creates with name/titles and when
not sure
about the domain we're dealing with.
Also I believe that supporting self.instances consistently with
meaningful name is saner.
Therefore I vote B

+1 for B.

My view is that this should be the advertised way, but the other
method (meaningless) should be there if the user needs it.

So as far as I'm concerned the two idioms should co-exist.  This
would
mimic what is possible with all puppet resources.  For instance
you can:

   file { '/tmp/foo.bar': ensure => present }

and you can

   file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
present }

The two refer to the same resource.

Right.


I disagree, using the name for the title is not creating a composite
name. The latter requires adding at least another parameter to be part
of the title.

Also in the case of the file resource, a path/filename is a unique
name,
which is not the case of an Openstack user which might exist in several
domains.

I actually added the meaningful name case in:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html


But that doesn't work very well because without adding the domain to
the
name, the following fails:

keystone_tenant {'project_1': domain => 'domain_A', ...}
keystone_tenant {'project_1': domain => 'domain_B', ...}

And adding the domain makes it a de-facto 'composite name'.

I agree that my example is not similar to what the keystone provider has
to do.  What I wanted to point out is that users in puppet are used to
having this kind of *interface*: one where you put something
meaningful in the title and one where you put something meaningless.
The fact that the meaningful one is a compound one shouldn't matter to
the user.


There is a big blocker of making use of domain name as parameter.
The issue is the limitation of autorequire.

Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-10-06 Thread Rich Megginson

On 10/06/2015 02:36 PM, Sofer Athlan-Guyot wrote:

Rich Megginson <rmegg...@redhat.com> writes:


There is a big blocker of making use of domain name as parameter.
The issue is the limitation of autorequire.

Because autorequire doesn't support any parameter other than the
resource type and expects the resource 

Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-30 Thread Rich Megginson

There is a big blocker of making use of domain name as parameter.
The issue is the limitation of autorequire.

Because autorequire doesn't support any parameter other than the
resource type and expects the resource title (or a list of) [1].

So for instance, keystone_user requires the tenant project1 from
domain1, then the resource name must be 

Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-29 Thread Rich Megginson

There is a big blocker of making use of domain name as parameter.
The issue is the limitation of autorequire.

Because autorequire doesn't support any parameter other than the
resource type, and expects the resource title (or a list of titles) [1].

So if, for instance, keystone_user requires the tenant project1 from
domain1, then the resource name must be 'project1::domain1', because
otherwise there is no way to specify 'domain1':

autorequire(:keystone_tenant) do
   self[:tenant]
end


Not exactly.  See https://review.ope
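To make the constraint concrete, here is a small plain-Ruby sketch (no Puppet involved; the titles are illustrative) of why autorequire needs the composite title: it can only match against other resources' titles, so a bare tenant name never matches a 'project::domain' title.

```ruby
# Titles under which keystone_tenant resources are indexed in the catalog.
tenant_titles = ['project1::domain1', 'project1::domain2']

# autorequire can only return candidate TITLES; a bare name matches nothing.
bare_match      = tenant_titles.select { |t| t == 'project1' }
# A composite 'name::domain' title matches exactly one tenant.
composite_match = tenant_titles.select { |t| t == 'project1::domain1' }

puts bare_match.inspect       # prints []
puts composite_match.inspect  # prints ["project1::domain1"]
```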

Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-18 Thread Rich Megginson

On 09/16/2015 02:58 PM, Cody Herriges wrote:

I wrote my first composite namevar type a few years ago and all the
magic is basically a single block of code inside the type...

https://github.com/puppetlabs/puppetlabs-java_ks/blob/master/lib/puppet/type/java_ks.rb#L145-L169

It basically boils down to these three things:

* Pick your namevars
(https://github.com/puppetlabs/puppetlabs-java_ks/blob/master/lib/puppet/type/java_ks.rb#L49-L64)
* Pick a delimiter
   - Personally I'd use @ here since we are talking about domains


Unfortunately, not only is "domains" an overloaded term, but "@" is 
already in use as a delimiter for keystone_user_role, and "@" is a legal 
character in usernames.



* Build your self.title_patterns method, accounting for delimited names
and arbitrary names.

While it looks like the README never got updated, the java_ks example
supports both meaningful titles and arbitrary ones.

java_ks { 'activemq_puppetca_keystore':
   ensure       => latest,
   name         => 'puppetca',
   certificate  => '/etc/puppet/ssl/certs/ca.pem',
   target       => '/etc/activemq/broker.ks',
   password     => 'puppet',
   trustcacerts => true,
}

java_ks { 'broker.example.com:/etc/activemq/broker.ks':
   ensure      => latest,
   certificate =>
'/etc/puppet/ssl/certs/broker.example.com.pe-internal-broker.pem',
   private_key =>
'/etc/puppet/ssl/private_keys/broker.example.com.pe-internal-broker.pem',
   password    => 'puppet',
}

You'll notice the first being an arbitrary title and the second
utilizing a ":" as a delimiter and omitting the name and target parameters.

Another code example can be found in the package type.

https://github.com/puppetlabs/puppet/blob/master/lib/puppet/type/package.rb#L268-L291.


Ok.  I've hacked a lib/puppet/type/keystone_tenant.rb to use name and 
domain with "isnamevar" and added a title_patterns like this:


  def self.title_patterns
identity = lambda {|x| x}
[
  [
/^(.+)::(.+)$/,
[
  [ :name, identity ],
  [ :domain, identity ]
]
  ],
  [
/^(.+)$/,
[
  [ :name, identity ]
]
  ]
]
  end
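Outside of Puppet, the two patterns above can be exercised directly. This plain-Ruby sketch (illustrative only — it mirrors the regexes rather than calling Puppet) shows how a composite title is split into name and domain, while a bare title only sets :name:

```ruby
# The same two patterns as in title_patterns: composite first, bare second.
PATTERNS = [
  [/^(.+)::(.+)$/, [:name, :domain]],
  [/^(.+)$/,       [:name]]
].freeze

# Return the hash of namevar values a given title would produce.
def parse_title(title)
  PATTERNS.each do |regex, keys|
    m = regex.match(title)
    return keys.zip(m.captures).to_h if m
  end
  nil
end

puts parse_title('project_A::domain_1')  # name is project_A, domain is domain_1
puts parse_title('project_A')            # only name is set
```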

Then I hacked one of the simple rspec-puppet files to do this:

  let :pre_condition do
    [
      'keystone_tenant { "tenant1": name => "tenant", domain => "domain1" }',
      'keystone_tenant { "tenant2": name => "tenant", domain => "domain2" }'
    ]
  end

because what I'm trying to do is not rely on the title of the resource, 
but to make the combination of 'name' + 'domain' the actual "name" of 
the resource.  This doesn't work.  This is the error I get running spec:


 Failure/Error: it { is_expected.to 
contain_package('python-keystone').with_ensure("present") }

 Puppet::Error:
   Puppet::Parser::AST::Resource failed with error ArgumentError: 
Cannot alias Keystone_tenant[tenant2] to ["tenant"]; resource 
["Keystone_tenant", "tenant"] already declared at line 3 on node 
unused.redhat.com
 # ./vendor/gems/puppet-3.8.2/lib/puppet/resource/catalog.rb:137:in 
`alias'
 # ./vendor/gems/puppet-3.8.2/lib/puppet/resource/catalog.rb:111:in 
`create_resource_aliases'
 # ./vendor/gems/puppet-3.8.2/lib/puppet/resource/catalog.rb:90:in 
`add_one_resource'


Is there any way to accomplish the above?  If not, please tell me now 
and put me out of my misery, and we can go back to the original plan of 
forcing everyone to use "::" in the resource titles and names.
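The failure above can be reproduced conceptually without Puppet. With a single namevar, the catalog aliases each resource under [type, namevar value], so two tenants both named 'tenant' collide regardless of domain. A hedged plain-Ruby simulation (the alias-key scheme is an assumption about Puppet 3.8's behavior, inferred from the error message above):

```ruby
# Simulated catalog alias table: one slot per [type, namevar-value].
catalog = {}

add = lambda do |title, name, domain|
  key = ['Keystone_tenant', name]  # domain never enters the alias key
  if catalog.key?(key)
    raise ArgumentError,
          "Cannot alias Keystone_tenant[#{title}] to [\"#{name}\"]"
  end
  catalog[key] = { title: title, name: name, domain: domain }
end

add.call('tenant1', 'tenant', 'domain1')
begin
  add.call('tenant2', 'tenant', 'domain2')
rescue ArgumentError => e
  puts e.message  # mirrors the "Cannot alias" rspec failure
end
```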






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-14 Thread Rich Megginson

On 09/14/2015 03:26 PM, Sofer Athlan-Guyot wrote:

Rich Megginson <rmegg...@redhat.com> writes:


I think we could support both.  I don't see it as an either/or
situation.

+1


A.

I think it's the B: meaningless approach here.


Pros
  - Easier names

That's subjective; creating unique and meaningful names doesn't look easy
to me.

The point is that this allows choice - maybe the user already has some
naming scheme, or wants to use a more "natural" meaningful name -
rather than being forced into a possibly "awkward" naming scheme with
"::"

   keystone_user { 'heat domain admin user':
 name => 'admin',
 domain => 'HeatDomain',
 ...
   }

   keystone_user_role {'heat domain admin user@::HeatDomain':
 roles => ['admin']
 ...
   }


Thanks, I see the point.


Cons
  - Titles have no meaning!

They have meaning to the user, not necessarily to Puppet.


  - Cases where 2 or more resources could exist

This seems to be the hardest part - I still cannot figure out how to
use "compound" names with Puppet.

I don't get this point.  what is "2 or more resource could exists" and
how it relates to compound names ?


I would like to uniquely specify a resource by the _combination_ of the 
name + the domain.  For example:


  keystone_user { 'domain A admin user':
name => 'admin',
domain => 'domainA',
  }

  keystone_user { 'domain B admin user':
name => 'admin',
domain => 'domainB',
  }

Puppet doesn't like this - the value of the 'name' property of 
keystone_user is not unique throughout the manifest/catalog, even though 
both users are distinct and unique because they exist in different 
domains (and will have different UUIDs assigned by Keystone).


Gilles posted links to discussions about how to use isnamevar and 
title_patterns with Puppet Ruby providers, but I could not get it to 
work.  I was using Puppet 3.8 - perhaps it only works in Puppet 4.0 or 
later.  At any rate, this is an area for someone to do some research.





  - More difficult to debug

More difficult than it is already? :P

require 'pry';binding.pry :)


Tried that on Fedora 22 (actually debugger + pry, because pry by itself 
isn't a debugger, but a REPL inspector).  Didn't work.


Also doesn't help you when someone hands you a pile of Puppet logs . . .




As a side note, someone raised an issue about the delimiter being
hardcoded to "::".  This could be a property of the resource.  This
would enable the user to use weird names with "::" in them and assign a "/"
(for instance) to the delimiter property:

Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/", ... }

bar::is::cool is the name of the domain and foo::blah is the project.

That's a good idea.  Please file a bug for that.

Done there: https://bugs.launchpad.net/puppet-keystone/+bug/1495691


Thanks!
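A possible shape for that delimiter property, as a plain-Ruby sketch (illustrative, not the provider's actual code): split the title on the last occurrence of the configured delimiter, so '::' stays usable inside either component.

```ruby
# Split 'name<delim>domain' on the LAST occurrence of the delimiter;
# a custom delimiter such as '/' then tolerates '::' inside both parts.
def split_title(title, delimiter = '::')
  name, sep, domain = title.rpartition(delimiter)
  return [title, nil] if sep.empty?  # no delimiter: bare name, no domain
  [name, domain]
end

p split_title('foo::blah/bar::is::cool', '/')  # => ["foo::blah", "bar::is::cool"]
p split_title('projectX::domainY')             # => ["projectX", "domainY"]
```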




Finally
-------
Thanks for reading that far!
To choose, please provide feedback with more pros/cons, examples and
your vote.

Thanks,
Gilles


PS:
[1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc


Bye,




Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-14 Thread Rich Megginson

On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:



But, if that's indeed not possible to have them both, then I would keep
only the meaningful name.



Bye,





Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-11 Thread Rich Megginson

On 09/11/2015 01:03 AM, Gilles Dubreuil wrote:

Hi,

Today in the #openstack-puppet channel a discussion about the pro and
cons of using domain parameter for Keystone V3 has been left opened.

The context

Domain names are needed in Openstack Keystone V3 for identifying users
or groups (of users) within different projects (tenant).
Users and groups are uniquely identified within a domain (or a realm as
opposed to project domains).
Then projects have their own domain so users or groups can be assigned
to them through roles.

In Kilo, Keystone V3 have been introduced as an experimental feature.
Puppet providers such as keystone_tenant, keystone_user,
keystone_role_user have been adapted to support it.
Also new ones have appeared (keystone_domain) or are on their way
(keystone_group, keystone_trust).
And to be backward compatible with V2, the default domain is used when
no domain is provided.

In existing providers such as keystone_tenant, the domain can be either
part of the name or provided as a parameter:

A. The 'composite namevar' approach:

keystone_tenant {'projectX::domainY': ... }
  B. The 'meaningless name' approach:

   keystone_tenant {'myproject': name => 'projectX', domain => 'domainY', ...}

Notes:
  - Actually using both combined should work too, with the domain
parameter supposedly overriding the domain part of the name.
  - Please look at [1] for some background on the two approaches.

The question
------------
Decide between the two approaches, the one we would like to retain for
puppet-keystone.

Why it matters?
---------------
1. Domain names are mandatory for every user, group or project, apart
from the backward-compatibility period mentioned earlier, where no
domain means using the default one.
2. Long term impact
3. Both approaches are not completely equivalent, with different
consequences for future usage.
4. Being consistent
5. Therefore the community to decide

The two approaches are not technically equivalent, and it also depends
on what a user might expect from a resource title.
See some of the examples below.

Because OpenStack DB tables have IDs to uniquely identify objects, they
can have several objects of the same family with the same name.
This has made it difficult for Puppet resources to guarantee
idempotency by keeping resources unique.
In the context of Keystone V3 domain, hopefully this is not the case for
the users, groups or projects but unfortunately this is still the case
for trusts.

Pros/Cons
---------
A.
   Pros
 - Easier names
   Cons
 - Titles have no meaning!
 - Cases where 2 or more resources could exist
 - More difficult to debug
 - Titles mismatch when listing the resources (self.instances)

B.
   Pros
 - Unique titles guaranteed
 - No ambiguity between resource found and their title
   Cons
 - More complicated titles

Examples
--------
= Meaningless name example 1=
Puppet run:
   keystone_tenant {'myproject': name => 'project_A', domain => 'domain_1', ...}

Second run:
   keystone_tenant {'myproject': name => 'project_A', domain => 'domain_2', ...}

Result/Listing:

   keystone_tenant { 'project_A::domain_1':
 ensure  => 'present',
 domain  => 'domain_1',
 enabled => 'true',
 id  => '7f0a2b670f48437ba1204b17b7e3e9e9',
   }
   keystone_tenant { 'project_A::domain_2':
 ensure  => 'present',
 domain  => 'domain_2',
 enabled => 'true',
 id  => '4b8255591949484781da5d86f2c47be7',
   }

= Composite name example 1  =
Puppet run:
   keystone_tenant {'project_A::domain_1': ... }

Second run:
   keystone_tenant {'project_A::domain_2': ... }

# Result/Listing
   keystone_tenant { 'project_A::domain_1':
 ensure  => 'present',
 domain  => 'domain_1',
 enabled => 'true',
 id  => '7f0a2b670f48437ba1204b17b7e3e9e9',
}
   keystone_tenant { 'project_A::domain_2':
 ensure  => 'present',
 domain  => 'domain_2',
 enabled => 'true',
 id  => '4b8255591949484781da5d86f2c47be7',
}

= Meaningless name example 2  =
Puppet run:
   keystone_tenant {'myproject1': name => 'project_A', domain => 'domain_1', ...}
   keystone_tenant {'myproject2': name => 'project_A', domain => 'domain_1',
description => 'blah', ...}

Result: project_A in domain_1 has a description

= Composite name example 2  =
Puppet run:
   keystone_tenant {'project_A::domain_1': ... }
   keystone_tenant {'project_A::domain_1': description => 'blah', ... }

Result: Error because the resource must be unique within a catalog
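The composite-title handling that approach B relies on can be sketched in Ruby (illustrative only; `split_title` is a made-up helper, not the actual puppet-keystone provider code, which implements this via Puppet's composite namevars):

```ruby
# Split a 'name::domain' resource title, mirroring approach B.
# Titles without a '::domain' suffix fall back to the 'Default' domain,
# matching the V2 backward-compatibility behaviour described above.
def split_title(title)
  name, sep, domain = title.rpartition('::')
  if sep.empty?
    { name: title, domain: 'Default' }  # no '::' found: bare name
  else
    { name: name, domain: domain }
  end
end
```

With unique 'project_A::domain_1' titles, a second resource for the same project in the same domain collides at catalog compilation time, which is exactly the uniqueness guarantee listed in B's pros.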

My vote

I would love to have approach A for its easier names.
But I've seen the challenge of maintaining the providers behind the
curtains and the confusion it creates with name/titles and when not sure
about the domain we're dealing with.
Also I believe that supporting self.instances consistently with
meaningful names is saner.
Therefore I vote B


+1

Although, in my limited testing, I have not been able to get this to 
work with Puppet 3.8.  I've been following the link below to create a 

Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-11 Thread Rich Megginson

On 09/11/2015 04:17 AM, David Chadwick wrote:

Whichever approach is adopted you need to consider the future and the
longer term objective of moving to fully hierarchical names. I believe
the current Keystone approach is only an interim one, as it only
supports partial hierarchies. Fully hierarchical names has been
discussed in the Keystone group, but I believe that this has been
shelved until later in order to get a quick fix released now.


Can you explain more about "fully hierarchical names"?  What is the 
string representation?




regards

David

On 11/09/2015 08:03, Gilles Dubreuil wrote:

Hi,

Today in the #openstack-puppet channel a discussion about the pros and
cons of using the domain parameter for Keystone V3 was left open.

The context

Domain names are needed in OpenStack Keystone V3 for identifying users
or groups (of users) within different projects (tenants).
Users and groups are uniquely identified within a domain (or a realm as
opposed to project domains).
Then projects have their own domain so users or groups can be assigned
to them through roles.

In Kilo, Keystone V3 has been introduced as an experimental feature.
Puppet providers such as keystone_tenant, keystone_user,
keystone_role_user have been adapted to support it.
New ones have also appeared (keystone_domain) or are on their way
(keystone_group, keystone_trust).
And to be backward compatible with V2, the default domain is used when
no domain is provided.

In existing providers such as keystone_tenant, the domain can be either
part of the name or provided as a parameter:

A. The 'composite namevar' approach:

keystone_tenant {'projectX::domainY': ... }
  B. The 'meaningless name' approach:

   keystone_tenant {'myproject': name => 'projectX', domain => 'domainY', ...}

Notes:
  - Combining both should also work, with the domain parameter
presumably overriding the domain part of the name.
  - Please see [1] for some background on the two approaches:

The question
-
Decide between the two approaches, the one we would like to retain for
puppet-keystone.

Why it matters?
---
1. Domain names are mandatory for every user, group or project, except
during the backward-compatibility period mentioned earlier, where no
domain means the default one is used.
2. Long term impact
3. The two approaches are not completely equivalent, with different
consequences for future usage.
4. Being consistent
5. Therefore it is for the community to decide

The two approaches are not technically equivalent and it also depends
what a user might expect from a resource title.
See some of the examples below.

Because OpenStack DB tables use IDs to uniquely identify objects, there
can be several objects of the same family with the same name.
This has made it difficult for Puppet resources to guarantee
idempotency through unique resources.
In the context of Keystone V3 domain, hopefully this is not the case for
the users, groups or projects but unfortunately this is still the case
for trusts.

Pros/Cons
--
A.
   Pros
 - Easier names
   Cons
 - Titles have no meaning!
 - Cases where 2 or more resources could exist
 - More difficult to debug
 - Titles mismatch when listing the resources (self.instances)

B.
   Pros
 - Unique titles guaranteed
 - No ambiguity between resource found and their title
   Cons
 - More complicated titles

Examples
--
= Meaningless name example 1=
Puppet run:
   keystone_tenant {'myproject': name => 'project_A', domain => 'domain_1', ...}

Second run:
   keystone_tenant {'myproject': name => 'project_A', domain => 'domain_2', ...}

Result/Listing:

   keystone_tenant { 'project_A::domain_1':
 ensure  => 'present',
 domain  => 'domain_1',
 enabled => 'true',
 id  => '7f0a2b670f48437ba1204b17b7e3e9e9',
   }
   keystone_tenant { 'project_A::domain_2':
 ensure  => 'present',
 domain  => 'domain_2',
 enabled => 'true',
 id  => '4b8255591949484781da5d86f2c47be7',
   }

= Composite name example 1  =
Puppet run:
   keystone_tenant {'project_A::domain_1': ... }

Second run:
   keystone_tenant {'project_A::domain_2': ... }

# Result/Listing
   keystone_tenant { 'project_A::domain_1':
 ensure  => 'present',
 domain  => 'domain_1',
 enabled => 'true',
 id  => '7f0a2b670f48437ba1204b17b7e3e9e9',
}
   keystone_tenant { 'project_A::domain_2':
 ensure  => 'present',
 domain  => 'domain_2',
 enabled => 'true',
 id  => '4b8255591949484781da5d86f2c47be7',
}

= Meaningless name example 2  =
Puppet run:
   keystone_tenant {'myproject1': name => 'project_A', domain => 'domain_1', ...}
   keystone_tenant {'myproject2': name => 'project_A', domain => 'domain_1',
description => 'blah', ...}

Result: project_A in domain_1 has a description

= Composite name example 2  =
Puppet run:
   keystone_tenant {'project_A::domain_1': ... }
   keystone_tenant {'project_A::domain_1': description => 'blah', ... }

Result: Error because the resource must be unique within a catalog

[openstack-dev] [puppet][keystone] plan for domain name handling

2015-09-07 Thread Rich Megginson
This is to outline the plan for the implementation of "puppet-openstack
will support Keystone domain scoped resource names without a '::domain'
in the name, only if the 'default_domain_id' parameter in Keystone has
_not_ been set.  That is, if the default domain is 'Default'."

Details here:
http://lists.openstack.org/pipermail/openstack-dev/2015-August/072878.html

In the process of implementation, several bugs were found and fixed (for 
review) in the underlying code.

https://bugs.launchpad.net/puppet-keystone/+bug/1492843
- review https://review.openstack.org/221119
https://bugs.launchpad.net/puppet-keystone/+bug/1492846
- review https://review.openstack.org/221120
https://bugs.launchpad.net/puppet-keystone/+bug/1492848
- review https://review.openstack.org/221121

I think the best course of action will be to rebase both 
https://review.openstack.org/#/c/218044 and 
https://review.openstack.org/#/c/218059/ on top of these, in order for 
the https://review.openstack.org/#/c/218059 to be able to pass the gate 
tests.


The next step will be to get rid of the introspection/indirection calls, 
which were a mistake from the beginning (terrible for performance), but 
that will be easily done on top of the above patches.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Keystone resource naming with domain support - no '::domain' if 'Default'

2015-09-01 Thread Rich Megginson
To close this thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-August/072878.html


puppet-openstack will support Keystone domain scoped resource names 
without a '::domain' in the name, only if the 'default_domain_id' 
parameter in Keystone has _not_ been set. That is, if the default domain 
is 'Default'.  This means that if the user/operator doesn't care about 
domains at all, the operator doesn't have to deal with them.  However, 
once the user/operator uses `keystone_domain`, and uses `is_default => 
true`, this means the user/operator _must_ use '::domain' with _all_ 
domain scoped Keystone resource names.


In addition:

* In the OpenStack L release:
   If 'default_domain_id' is set, puppet will issue a warning if a name 
is used without '::domain'. I think this is a good thing to do, just in 
case someone sets the default_domain_id by mistake.


* In OpenStack M release:
   Puppet will issue a warning if a name is used without '::domain'.

* From Openstack N release:
   A name must be used with '::domain'.
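The release-by-release policy above can be modeled as a small check (hypothetical Ruby sketch; the real warning logic lives in the puppet-keystone providers and may differ):

```ruby
# Decide how to treat a resource title under the L-release rules:
# bare names are fine only while the default domain is untouched.
def name_check(title, default_domain_id: 'default')
  return :ok if title.include?('::')  # '::domain' given explicitly
  if default_domain_id == 'default'
    :ok                               # default domain untouched
  else
    :warn                             # operator opted into domains
  end
end
```

Under the M-release rule the bare-name branch would always return :warn, and from N it would be an error.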




[openstack-dev] correction: Re: [puppet][keystone] Keystone resource naming with domain support - no '::domain' if 'Default'

2015-09-01 Thread Rich Megginson

Slight correction below:

On 09/01/2015 10:56 AM, Rich Megginson wrote:
To close this thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-August/072878.html


puppet-openstack will support Keystone domain scoped resource names 
without a '::domain' in the name, only if the 'default_domain_id' 
parameter in Keystone has _not_ been set.


Or if the 'default_domain_id' parameter has been set to 'default'.

That is, if the default domain is 'Default'.  This means that if the 
user/operator doesn't care about domains at all, the operator doesn't 
have to deal with them.  However, once the user/operator uses 
`keystone_domain`, and uses `is_default => true`, this means the 
user/operator _must_ use '::domain' with _all_ domain scoped Keystone 
resource names.


Note that the domain named 'Default' with the UUID 'default' is created 
automatically by Keystone, so no need for puppet to create it or ensure 
that it exists.




In addition:

* In the OpenStack L release:
   If 'default_domain_id' is set,

or if 'default_domain_id' is not 'default',
puppet will issue a warning if a name is used without '::domain'. I 
think this is a good thing to do, just in case someone sets the 
default_domain_id by mistake.


* In OpenStack M release:
   Puppet will issue a warning if a name is used without '::domain'.

* From Openstack N release:
   A name must be used with '::domain'.







Re: [openstack-dev] [puppet][keystone] Keystone resource naming with domain support - no '::domain' if 'Default'

2015-08-27 Thread Rich Megginson

On 08/27/2015 07:00 AM, Gilles Dubreuil wrote:


On 27/08/15 22:40, Gilles Dubreuil wrote:


On 27/08/15 16:59, Gilles Dubreuil wrote:


On 26/08/15 06:30, Rich Megginson wrote:

This concerns the support of the names of domain scoped Keystone
resources (users, projects, etc.) in puppet.

At the puppet-openstack meeting today [1] we decided that
puppet-openstack will support Keystone domain scoped resource names
without a '::domain' in the name, only if the 'default_domain_id'
parameter in Keystone has _not_ been set.  That is, if the default
domain is 'Default'.  In addition:

* In the OpenStack L release, if 'default_domain_id' is set, puppet will
issue a warning if a name is used without '::domain'.

The default domain is always set to 'default' unless overridden to
something else.

Just to clarify, I don't see any logical difference between the
default_domain_id being 'default' or something else.


There is, however, a difference between explicitly setting the value to 
something other than 'default', and not setting it at all.


That is, if a user/operator specifies

  keystone_domain { 'someotherdomain':
    is_default => true,
  }

then the user/operator is explicitly telling puppet-keystone that a 
non-default domain is being used, and that the user/operator is aware of 
domains, and will create domain scoped resources with the '::domain' in 
the name.




Per keystone.conf comment (as seen below) the default_domain_id,
whatever its value, is created as a valid domain.

# This references the domain to use for all Identity API v2 requests
(which are not aware of domains). A domain with this ID will be created
for you by keystone-manage db_sync in migration 008. The domain
referenced by this ID cannot be deleted on the v3 API, to prevent
accidentally breaking the v2 API. There is nothing special about this
domain, other than the fact that it must exist in order to maintain
support for your v2 clients. (string value)
#default_domain_id = default

Testing whether a 'default_domain_id' is set or not therefore
translates to checking whether the id is 'default' or something else.


Not exactly.  There is a difference between explicitly setting the 
value, and implicitly relying on the default 'default' value.



But I don't see the point here. If a user decides to change 'default' to
'This_is_the_domain_id_for_legacy_v2', how does this help?


If the user changes that, then that means the user has also decided to 
explicitly provided '::domain' in all domain scoped resource names.




If that makes sense then I would actually avoid the intermediate stage:

* In OpenStack L release:
Puppet will issue a warning if a name is used without '::domain'.

* From Openstack M release:
A name must be used with '::domain'.


* In the OpenStack M release, puppet will issue a warning if a name is
used without '::domain', even if 'default_domain_id' is not set.

Therefore the 'default_domain_id' is never 'not set'.


* In N (or possibly, O), resource names will be required to have
'::domain'.


I understand, from the OpenStack N release onward, the domain would be
mandatory.

So I would like to revisit the list:

* In OpenStack L release:
   Puppet will issue a warning if a name is used without '::domain'.

* In OpenStack M release:
   Puppet will issue a warning if a name is used without '::domain'.

* From Openstack N release:
   A name must be used with '::domain'.



+1


The current spec [2] and current code [3] try to support names without a
'::domain' in the name, in non-default domains, provided the name is
unique across _all_ domains.  This will have to be changed in the
current code and spec.


Ack


[1]
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-08-25-15.01.html

[2]
http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html

[3]
https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L217








[openstack-dev] [puppet][keystone] Keystone resource naming with domain support - no '::domain' if 'Default'

2015-08-25 Thread Rich Megginson
This concerns the support of the names of domain scoped Keystone 
resources (users, projects, etc.) in puppet.


At the puppet-openstack meeting today [1] we decided that 
puppet-openstack will support Keystone domain scoped resource names 
without a '::domain' in the name, only if the 'default_domain_id' 
parameter in Keystone has _not_ been set.  That is, if the default 
domain is 'Default'.  In addition:


* In the OpenStack L release, if 'default_domain_id' is set, puppet will 
issue a warning if a name is used without '::domain'.
* In the OpenStack M release, puppet will issue a warning if a name is 
used without '::domain', even if 'default_domain_id' is not set.

* In N (or possibly, O), resource names will be required to have '::domain'.

The current spec [2] and current code [3] try to support names without a 
'::domain' in the name, in non-default domains, provided the name is 
unique across _all_ domains.  This will have to be changed in the 
current code and spec.



[1] 
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-08-25-15.01.html
[2] 
http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html
[3] 
https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L217





Re: [openstack-dev] [Fuel][Puppet] Keystone V2/V3 service endpoints

2015-08-14 Thread Rich Megginson

On 08/14/2015 06:51 AM, Matthew Mosesohn wrote:

Gilles,

I already considered this when looking at another openstackclient 
issue. Version 1.0.4 has almost no changes from 1.0.3, which is the 
official release for Kilo. Maybe we can get this keystone URL handling 
fix backported to the 1.0.X branch of openstackclient?


I think we need some sort of fix in openstacklib and/or puppet-keystone 
so that v3 providers that use token auth can replace any /v2.0 in the 
url with /v3, to override any settings of OS_URL or OS_AUTH_URL in ENV 
or openrc.




-Matthew

On Fri, Aug 14, 2015 at 3:47 PM, Gilles Dubreuil gil...@redhat.com wrote:




On 14/08/15 20:45, Gilles Dubreuil wrote:


 On 13/08/15 23:29, Rich Megginson wrote:
 On 08/13/2015 12:41 AM, Gilles Dubreuil wrote:
 Hi Matthew,

 On 11/08/15 01:14, Rich Megginson wrote:
 On 08/10/2015 07:46 AM, Matthew Mosesohn wrote:
 Sorry to everyone for bringing up this old thread, but it
seems we may
 need more openstackclient/keystone experts to settle this.

 I'm referring to the comments in
 https://review.openstack.org/#/c/207873/
 Specifically comments from Richard Megginson and Gilles Dubreuil
 indicating openstackclient behavior for v3 keystone API.


 A few items seem to be under dispute:
 1 - Keystone should be able to accept v3 requests at
 http://keystone-server:5000/
 I don't think so.  Keystone requires the version suffix
/v2.0 or
 /v3.

 Yes, if the public endpoint is set without a version then the
service
 can deal with either version.

 http://paste.openstack.org/raw/412819/

 That is not true for the admin endpoint (authentication is
already done,
 the admin services deals only with tokens), at least for now,
Keystone
 devs are working on it.

 I thought it worked like this - the openstack cli will infer
from the
 arguments if it should do v2 or v3 auth.  In the above case for v3,
 since you specify the username/password, osc knows it has to use
 password auth (as opposed to token auth), and since all of the
required
 v3 arguments are provided (v3 api version, domains for
user/project), it
 can use v3 password auth.  It knows it has to use the
/v3/auth/token
 path to get a token.

 Similarly for v2, since it only has username/password, no v3
api or v3
 domain arguments, it knows it has to use v2 password auth.  It
knows it
 has to use /v2.0/token to get a token.

 With the puppet-keystone code, since it uses TOKEN/URL, osc
cannot infer
 if it can use v2 or v3, so it requires the /v2.0 or /v3
suffix, and
 the api version.


 First of my apologies because I confused admin enpdoint with the
admin
 service (or whatever that's dubbed).

 As of Keystone v3 (not the API, the latest version of Keystone which
 supports both API v2.0 and V3), the OS_AUTH_URL doesn't need the
 version. That can be effectively any of the endpoints, either
the main
 (or public) by default on port 5000 or the admin by default on port
 35357, the third one internal pointing to either of the first
two ones.

 This is a behavior at Keystone level not at the OSC. I mean OSC will
 just reflect the http-api behavior.

 Now the admin service, the one needed for the OS-TOKEN (used for
 bootstrapping) needs a URL (OS_URL) with a version to work.

 ATM, the issue with puppet keystone is that endpoints, OS_URL and
 OS_AUTH_URL are walking on each others.



My latest update on this one: I found out while testing with beaker,
which uses OSC 1.0.3, that OS_AUTH_URL is not handled properly.

I had been testing with OSC 1.5.1 and now the latest 1.6.1 from the
Delorean repo, where the version-less URLs are working, but not with OSC 1.0.1:

--

# cat openrc
export OS_AUTH_URL=http://127.0.0.1:5000
export OS_USERNAME=adminv3
export OS_PASSWORD=testing
export OS_PROJECT_NAME=openstackv3
export OS_USER_DOMAIN_NAME=admin_domain
export OS_PROJECT_DOMAIN_NAME=admin_domain
export OS_IDENTITY_API_VERSION=3

# openstack endpoint list -f csv
ID,Region,Service Name,Service Type,Enabled,Interface,URL

87b7db1b23df487bb4ec96de5aa3c271,RegionOne,keystone,identity,True,internal,http://127.0.0.1:5000
d9b345109d8a4320ac0dd832d2532cce,RegionOne,keystone,identity,True,admin,http://127.0.0.1:35357
f3a579a64f0241ef9aef3dc983e0fd4a,RegionOne,keystone,identity,True,public,http://127.0.0.1:5000

--

# cat openrc_v2
export OS_AUTH_URL=http://[::1]:5000
export OS_USERNAME=admin
export OS_PASSWORD=a_big_secret
export OS_TENANT_NAME=openstack

# openstack endpoint list -f csv --long
ID,Region

Re: [openstack-dev] [Fuel][Puppet] Keystone V2/V3 service endpoints

2015-08-13 Thread Rich Megginson

On 08/13/2015 12:41 AM, Gilles Dubreuil wrote:

Hi Matthew,

On 11/08/15 01:14, Rich Megginson wrote:

On 08/10/2015 07:46 AM, Matthew Mosesohn wrote:

Sorry to everyone for bringing up this old thread, but it seems we may
need more openstackclient/keystone experts to settle this.

I'm referring to the comments in https://review.openstack.org/#/c/207873/
Specifically comments from Richard Megginson and Gilles Dubreuil
indicating openstackclient behavior for v3 keystone API.


A few items seem to be under dispute:
1 - Keystone should be able to accept v3 requests at
http://keystone-server:5000/

I don't think so.  Keystone requires the version suffix /v2.0 or /v3.


Yes, if the public endpoint is set without a version then the service
can deal with either version.

http://paste.openstack.org/raw/412819/

That is not true for the admin endpoint (authentication is already done,
the admin services deals only with tokens), at least for now, Keystone
devs are working on it.


I thought it worked like this - the openstack cli will infer from the 
arguments if it should do v2 or v3 auth.  In the above case for v3, 
since you specify the username/password, osc knows it has to use 
password auth (as opposed to token auth), and since all of the required 
v3 arguments are provided (v3 api version, domains for user/project), it 
can use v3 password auth.  It knows it has to use the /v3/auth/token 
path to get a token.


Similarly for v2, since it only has username/password, no v3 api or v3 
domain arguments, it knows it has to use v2 password auth.  It knows it 
has to use /v2.0/token to get a token.


With the puppet-keystone code, since it uses TOKEN/URL, osc cannot infer 
if it can use v2 or v3, so it requires the /v2.0 or /v3 suffix, and 
the api version.
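The inference described above can be roughly modeled as follows (illustrative only; the actual openstackclient auth-plugin selection is more involved):

```ruby
# Pick an auth strategy from the environment, mirroring the osc behaviour
# described above: TOKEN/URL needs an explicit version in the URL, v3
# password auth needs domain-aware arguments, otherwise fall back to v2.
def infer_auth(env)
  if env['OS_TOKEN'] && env['OS_URL']
    :token_endpoint     # version suffix must already be in OS_URL
  elsif env['OS_IDENTITY_API_VERSION'] == '3' || env['OS_USER_DOMAIN_NAME']
    :v3_password        # uses the /v3 auth path
  else
    :v2_password        # uses the /v2.0 auth path
  end
end
```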





2 - openstackclient should be able to interpret v3 requests and append
v3/ to OS_AUTH_URL=http://keystone-server:5000/ or rewrite the URL
if it is set as
OS_AUTH_URL=http://keystone-server:5000/

It does, if it can determine from the given authentication arguments if
it can do v3 or v2.0.


It effectively does, again, assuming the path doesn't contain a version
number (x.x.x.x:5000)


3 - All deployments require /etc/keystone/keystone.conf with a token
(and not simply use openrc for creating additional endpoints, users,
etc beyond keystone itself and an admin user)

No.  What I said about this issue was Most people using
puppet-keystone, and realizing Keystone resources on nodes that are not
the Keystone node, put a /etc/keystone/keystone.conf on that node with
the admin_token in it.

That doesn't mean the deployment requires /etc/keystone/keystone.conf.
It should be possible to realize Keystone resources on non-Keystone
nodes by using ENV or openrc or other means.


Agreed. Also keystone.conf is used only to bootstrap keystone
installation and create admin users, etc.



I believe it should be possible to set v2.0 keystone OS_AUTH_URL in
openrc and puppet-keystone + puppet-openstacklib should be able to
make v3 requests sensibly by manipulating the URL.

Yes.  Because for the puppet-keystone resource providers, they are coded
to a specific version of the api, and therefore need to be able to
set/override the OS_IDENTITY_API_VERSION and the version suffix in the URL.


No. To support V2 and V3, the OS_AUTH_URL must not contain any version
in order.

The less we deal with the version number the better!


Additionally, creating endpoints/users/roles should be possible via
openrc alone.

Yes.


Yes, the openrc variables are used, if not available then the service
token from the keystone.conf is used.


It's not possible to write single node tests that can demonstrate this
functionality, which is why it probably went undetected for so long.

And since this is supported, we need tests for this.

I'm not sure what the issue is, besides the fact that keystone_puppet
doesn't generate an RC file once the admin user has been created. That is
left to be done by the composition layer, although we might want to integrate that.

If that issue persists, assuming the AUTH_URL is free of a version
number and an openrc is in place, we're going to need a bug number
to track the investigation.


If anyone can speak up on these items, it could help influence the
outcome of this patch.

Thank you for your time.

Best Regards,
Matthew Mosesohn


Thanks,
Gilles


On Fri, Jul 31, 2015 at 6:32 PM, Rich Megginson rmegg...@redhat.com wrote:

 On 07/31/2015 07:18 AM, Matthew Mosesohn wrote:

 Jesse, thanks for raising this. Like you, I should just track
 upstream
 and wait for full V3 support.

 I've taken the quickest approach and written fixes to
 puppet-openstacklib and puppet-keystone:
 https://review.openstack.org/#/c/207873/
 https://review.openstack.org/#/c/207890/

 and again to Fuel-Library:
 https://review.openstack.org/#/c

Re: [openstack-dev] [Fuel][Puppet] Keystone V2/V3 service endpoints

2015-08-10 Thread Rich Megginson

On 08/10/2015 07:46 AM, Matthew Mosesohn wrote:
Sorry to everyone for bringing up this old thread, but it seems we may 
need more openstackclient/keystone experts to settle this.


I'm referring to the comments in https://review.openstack.org/#/c/207873/
Specifically comments from Richard Megginson and Gilles Dubreuil 
indicating openstackclient behavior for v3 keystone API.



A few items seem to be under dispute:
1 - Keystone should be able to accept v3 requests at 
http://keystone-server:5000/


I don't think so.  Keystone requires the version suffix /v2.0 or /v3.

2 - openstackclient should be able to interpret v3 requests and append 
v3/ to OS_AUTH_URL=http://keystone-server:5000/ or rewrite the URL 
if it is set as OS_AUTH_URL=http://keystone-server:5000/


It does, if it can determine from the given authentication arguments if 
it can do v3 or v2.0.


3 - All deployments require /etc/keystone/keystone.conf with a token 
(and not simply use openrc for creating additional endpoints, users, 
etc beyond keystone itself and an admin user)


No.  What I said about this issue was Most people using 
puppet-keystone, and realizing Keystone resources on nodes that are not 
the Keystone node, put a /etc/keystone/keystone.conf on that node with 
the admin_token in it.


That doesn't mean the deployment requires /etc/keystone/keystone.conf.  
It should be possible to realize Keystone resources on non-Keystone 
nodes by using ENV or openrc or other means.




I believe it should be possible to set v2.0 keystone OS_AUTH_URL in 
openrc and puppet-keystone + puppet-openstacklib should be able to 
make v3 requests sensibly by manipulating the URL.


Yes.  Because for the puppet-keystone resource providers, they are coded 
to a specific version of the api, and therefore need to be able to 
set/override the OS_IDENTITY_API_VERSION and the version suffix in the URL.


Additionally, creating endpoints/users/roles should be possible via 
openrc alone.


Yes.

It's not possible to write single node tests that can demonstrate this 
functionality, which is why it probably went undetected for so long.


And since this is supported, we need tests for this.


If anyone can speak up on these items, it could help influence the 
outcome of this patch.


Thank you for your time.

Best Regards,
Matthew Mosesohn

On Fri, Jul 31, 2015 at 6:32 PM, Rich Megginson rmegg...@redhat.com wrote:


On 07/31/2015 07:18 AM, Matthew Mosesohn wrote:

Jesse, thanks for raising this. Like you, I should just track
upstream
and wait for full V3 support.

I've taken the quickest approach and written fixes to
puppet-openstacklib and puppet-keystone:
https://review.openstack.org/#/c/207873/
https://review.openstack.org/#/c/207890/

and again to Fuel-Library:
https://review.openstack.org/#/c/207548/1

I greatly appreciate the quick support from the community to
find an
appropriate solution. Looks like I'm just using a weird edge case
where we're creating users on a separate node from where
keystone is
installed and it never got thoroughly tested, but I'm happy to fix
bugs where I can.


Most puppet deployments either realize all keystone resources on
the keystone node, or drop an /etc/keystone/keystone.conf with
admin token onto non-keystone nodes where additional keystone
resources need to be realized.



-Matthew

On Fri, Jul 31, 2015 at 3:54 PM, Jesse Pretorius
jesse.pretor...@gmail.com wrote:

With regards to converting all services to use Keystone v3
endpoints, note
the following:

1) swift-dispersion currently does not support consuming
Keystone v3
endpoints [1]. There is a patch merged to master [2] to
fix that, but a
backport to kilo is yet to be done.
2) Each type (internal, admin, public) of endpoint created
with the Keystone
v3 API has its own unique id, unlike with the v2 API where
they're all
created with a single ID. This results in the keystone
client being unable
to read the catalog created via the v3 API when querying
via the v2 API. The
solution is to use the openstack client and to use the v3
API but this
obviously needs to be noted for upgrade impact and operators.
3) When glance is setup to use swift as a back-end,
glance_store is unable
to authenticate to swift when the endpoint it uses is a v3
endpoint. There
is a review to master in progress [3] to fix this which is
unlikely to make
it into kilo.

We (the openstack-ansible/os-ansible-deployment project) are tracking
these issues and doing tests to figure out all the bits.

Re: [openstack-dev] [puppet][keystone] To always use or not use domain name?

2015-08-10 Thread Rich Megginson

On 08/10/2015 10:45 AM, Richard Raseley wrote:

On 08/07/2015 01:58 PM, Rich Megginson wrote:

Would someone who actually has to deploy/maintain puppet manifests and
supporting code chime in here?  How do you feel about having to ensure
that every domain scoped Keystone resource name must end in
::domain?  At the very least, if not using domains, and not changing
the default domain, you would have to ensure something::Default
_everywhere_ - and I do mean everywhere - every user and tenant name
use, including in keystone_user_role, and in other, higher level
classes/defines that refer to keystone users and tenants.

Anyone?

I also wonder how the Ansible folks are handling this, as they move to
support domains and other Keystone v3 features in openstack-ansible code?

As an operator, I like the ::$domain notation. I think the benefits it
brings in terms of clarity outweigh any downsides.


If you have to add ::$domain to all of your manifests and supporting 
code, what impact does that have?
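
Concretely, ensuring ::Default everywhere looks something like this (an illustrative sketch only; the resource names are invented):

```puppet
# Illustrative only: every domain-scoped resource name carries ::domain,
# even when everything lives in the default domain.
keystone_tenant { 'services::Default':
  ensure => present,
}
keystone_user { 'glance::Default':
  ensure => present,
}
keystone_user_role { 'glance::Default@services::Default':
  roles => ['admin'],
}
```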






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][neutron] How to give nested VM access to outside network?

2015-08-07 Thread Rich Megginson

On 08/04/2015 01:44 AM, Andreas Scheuring wrote:

Can you try answer 1 of [1]?

I've never tried it, but I heard from folks who configured it like that.
With this masquerading, your vm should be able to reach your 192.x
network. But as it's NAT it won't work the other way round (e.g.
establish a connection from outside into your vm)

The proper way would be to configure your provider network to match the
192.x subnet. In addition you would need to plug your 192.x interface
(eth0?) into the ovs br-ex. But be careful! This step breaks
connectivity via this interface, so be sure that you're logged in via
another interface or via some vnc session.


Thanks.  This works:
1) Add this to local.conf before running stack.sh:

[[local|localrc]]
ADMIN_PASSWORD=secret
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,n-crt,n-novnc,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta
SERVICE_HOST=127.0.0.1
NETWORK_GATEWAY=10.0.0.1
PUBLIC_NETWORK_GATEWAY=172.24.4.1
... other config ...

[[post-config|$Q_DHCP_CONF_FILE]]
[DEFAULT]
dnsmasq_dns_servers = 192.168.122.1

NOTE: If you are adding the above from a script as e.g. a here doc, 
don't forget to escape the $ e.g. [[post-config|\$Q_DHCP_CONF_FILE]]
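
A minimal sketch of that pitfall (the file name is arbitrary): in an unquoted here-doc, the shell expands $Q_DHCP_CONF_FILE (likely to an empty string) before the text reaches the file, so the backslash is required to keep it literal.

```shell
# Write the local.conf fragment from an unquoted here-doc; the escaped
# \$ keeps the literal variable name in the generated file.
cat > local.conf.fragment <<EOF
[[post-config|\$Q_DHCP_CONF_FILE]]
[DEFAULT]
dnsmasq_dns_servers = 192.168.122.1
EOF
cat local.conf.fragment
```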


2) Run this command after running stack.sh and before creating a vm:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Now, the nested VM can ping external IP addresses, and name server 
lookups work.




If you have further questions regarding provider networks, feel free to
ask again!



[1]
https://ask.openstack.org/en/question/44266/connect-vm-in-devstack-to-external-network/


On Mo, 2015-08-03 at 22:07 -0600, Rich Megginson wrote:

I'm running devstack in a VM (Fedora 21 host, EL 7.1.x VM) with a static
IP address (because dhcp was not working):

  cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
DEVICE=eth0
BOOTPROTO=static
DHCPCLASS=
HWADDR=$VM_MAC
IPADDR=192.168.122.5
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
ONBOOT=yes
NM_CONTROLLED=no
TYPE=Ethernet
USERCTL=yes
PEERDNS=yes
DNS1=192.168.122.1
IPV6INIT=no
EOF

with Neutron networking enabled and Nova networking disabled:

[[local|localrc]]
IP_VERSION=4
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,n-crt,n-novnc,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta
SERVICE_HOST=127.0.0.1
NETWORK_GATEWAY=10.0.0.1
PUBLIC_NETWORK_GATEWAY=172.24.4.1
...

I've followed this some, but I don't want to use the provider network:
http://docs.openstack.org/developer/devstack/guides/neutron.html

I've hacked the floating_ips exercise to use neutron networking commands:

http://ur1.ca/ncjm6

I can ssh into the nested VM, I can assign it a floating IP.

However, it cannot see the outside world.  From it, I can ping the
10.0.0.1 network and the 172.24.4.1 network, and even 192.168.122.5, but
not 192.168.122.1 or anything outside of the VM.

route looks like this: http://ur1.ca/ncjog

ip addr looks like this: http://ur1.ca/ncjop

Here is the entire output of stack.sh:
https://rmeggins.fedorapeople.org/stack.out

Here is the entire output of the exercise:
https://rmeggins.fedorapeople.org/exercise.out


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] To always use or not use domain name?

2015-08-07 Thread Rich Megginson

On 08/05/2015 07:48 PM, Gilles Dubreuil wrote:


On 06/08/15 10:16, Jamie Lennox wrote:


- Original Message -

From: Adam Young ayo...@redhat.com
To: openstack-dev@lists.openstack.org
Sent: Thursday, August 6, 2015 1:03:55 AM
Subject: Re: [openstack-dev] [puppet][keystone] To always use or not use domain 
name?

On 08/05/2015 08:16 AM, Gilles Dubreuil wrote:

While working on trust provider for the Keystone (V3) puppet module, a
question about using domain names came up.

Shall we allow or not to use names without specifying the domain name in
the resource call?

I have this trust case involving a trustor user, a trustee user and a
project.

For each user/project the domain can be explicit (mandatory):

trustor_name::domain_name

or implicit (optional):

trustor_name[::domain_name]

If a domain isn't specified the domain name can be assumed (intuited)
from either the default domain or the domain of the corresponding
object, if unique among all domains.

If you are specifying project by name, you must specify domain either
via name or id.  If you specify project by ID, you run the risk of
conflicting if you provide a domain specifier (ID or name).


Although allowing to not use the domain might seems easier at first, I
believe it could lead to confusion and errors. The latter being harder
for the user to detect.

Therefore it might be better to always pass the domain information.

Probably a good idea, as it will catch if you are making some
assumption.  I.e., I say DomainX ProjectQ but I mean DomainQ ProjectQ.

Agreed. Like it or not domains are a major part of using the v3 api and if you 
want to use project names and user names we should enforce that domains are 
provided.
Particularly at the puppet level (dealing with users who should understand this 
stuff) anything that tries to guess what the user means is a bad idea and going 
to lead to confusion when it breaks.


I totally agree.

Thanks for participating


Would someone who actually has to deploy/maintain puppet manifests and 
supporting code chime in here?  How do you feel about having to ensure 
that every domain scoped Keystone resource name must end in ::domain?  
At the very least, if not using domains, and not changing the default 
domain, you would have to ensure something::Default _everywhere_ - and 
I do mean everywhere - every user and tenant name use, including in 
keystone_user_role, and in other, higher level classes/defines that 
refer to keystone users and tenants.


Anyone?

I also wonder how the Ansible folks are handling this, as they move to 
support domains and other Keystone v3 features in openstack-ansible code?






I believe using the full domain name approach is better.
But it's difficult to tell, because puppet-keystone and
puppet-openstacklib now rely on python-openstackclient (OSC) to
interface with Keystone. The fact that we can use OSC defaults
(OS_DEFAULT_DOMAIN or equivalent to set the default domain) doesn't
necessarily make it the best approach. For example, the hard-coded
value [1] makes it flaky.

[1]
https://github.com/openstack/python-openstackclient/blob/master/openstackclient/shell.py#L40

To help determine the approach to use, any feedback will be appreciated.

Thanks,
Gilles


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][neutron] How to give nested VM access to outside network?

2015-08-03 Thread Rich Megginson

On 08/03/2015 10:07 PM, Rich Megginson wrote:
I'm running devstack in a VM (Fedora 21 host, EL 7.1.x VM) with a 
static IP address (because dhcp was not working):


cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
DEVICE=eth0
BOOTPROTO=static
DHCPCLASS=
HWADDR=$VM_MAC
IPADDR=192.168.122.5
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
ONBOOT=yes
NM_CONTROLLED=no
TYPE=Ethernet
USERCTL=yes
PEERDNS=yes
DNS1=192.168.122.1
IPV6INIT=no
EOF

with Neutron networking enabled and Nova networking disabled:

[[local|localrc]]
IP_VERSION=4
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,n-crt,n-novnc,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta
SERVICE_HOST=127.0.0.1
NETWORK_GATEWAY=10.0.0.1
PUBLIC_NETWORK_GATEWAY=172.24.4.1
...

I've followed this some, but I don't want to use the provider network: 
http://docs.openstack.org/developer/devstack/guides/neutron.html


I've hacked the floating_ips exercise to use neutron networking commands:

http://ur1.ca/ncjm6

I can ssh into the nested VM, I can assign it a floating IP.

However, it cannot see the outside world.  From it, I can ping the 
10.0.0.1 network and the 172.24.4.1 network, and even 192.168.122.5, 
but not 192.168.122.1 or anything outside of the VM.


route looks like this: http://ur1.ca/ncjog

ip addr looks like this: http://ur1.ca/ncjop

Here is the entire output of stack.sh: 
https://rmeggins.fedorapeople.org/stack.out


Here is the entire output of the exercise: 
https://rmeggins.fedorapeople.org/exercise.out


More neutron information: http://ur1.ca/ncjt5

This was working with nova networking - the nested VM had full access to 
the outside.





__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][neutron] How to give nested VM access to outside network?

2015-08-03 Thread Rich Megginson
I'm running devstack in a VM (Fedora 21 host, EL 7.1.x VM) with a static 
IP address (because dhcp was not working):


cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
DEVICE=eth0
BOOTPROTO=static
DHCPCLASS=
HWADDR=$VM_MAC
IPADDR=192.168.122.5
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
ONBOOT=yes
NM_CONTROLLED=no
TYPE=Ethernet
USERCTL=yes
PEERDNS=yes
DNS1=192.168.122.1
IPV6INIT=no
EOF

with Neutron networking enabled and Nova networking disabled:

[[local|localrc]]
IP_VERSION=4
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,n-crt,n-novnc,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta
SERVICE_HOST=127.0.0.1
NETWORK_GATEWAY=10.0.0.1
PUBLIC_NETWORK_GATEWAY=172.24.4.1
...

I've followed this some, but I don't want to use the provider network: 
http://docs.openstack.org/developer/devstack/guides/neutron.html


I've hacked the floating_ips exercise to use neutron networking commands:

http://ur1.ca/ncjm6

I can ssh into the nested VM, I can assign it a floating IP.

However, it cannot see the outside world.  From it, I can ping the 
10.0.0.1 network and the 172.24.4.1 network, and even 192.168.122.5, but 
not 192.168.122.1 or anything outside of the VM.


route looks like this: http://ur1.ca/ncjog

ip addr looks like this: http://ur1.ca/ncjop

Here is the entire output of stack.sh: 
https://rmeggins.fedorapeople.org/stack.out


Here is the entire output of the exercise: 
https://rmeggins.fedorapeople.org/exercise.out



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Puppet] Keystone V2/V3 service endpoints

2015-07-31 Thread Rich Megginson

On 07/31/2015 07:18 AM, Matthew Mosesohn wrote:

Jesse, thanks for raising this. Like you, I should just track upstream
and wait for full V3 support.

I've taken the quickest approach and written fixes to
puppet-openstacklib and puppet-keystone:
https://review.openstack.org/#/c/207873/
https://review.openstack.org/#/c/207890/

and again to Fuel-Library:
https://review.openstack.org/#/c/207548/1

I greatly appreciate the quick support from the community to find an
appropriate solution. Looks like I'm just using a weird edge case
where we're creating users on a separate node from where keystone is
installed and it never got thoroughly tested, but I'm happy to fix
bugs where I can.


Most puppet deployments either realize all keystone resources on the 
keystone node, or drop an /etc/keystone/keystone.conf with admin token 
onto non-keystone nodes where additional keystone resources need to be 
realized.




-Matthew

On Fri, Jul 31, 2015 at 3:54 PM, Jesse Pretorius
jesse.pretor...@gmail.com wrote:

With regards to converting all services to use Keystone v3 endpoints, note
the following:

1) swift-dispersion currently does not support consuming Keystone v3
endpoints [1]. There is a patch merged to master [2] to fix that, but a
backport to kilo is yet to be done.
2) Each type (internal, admin, public) of endpoint created with the Keystone
v3 API has its own unique id, unlike with the v2 API where they're all
created with a single ID. This results in the keystone client being unable
to read the catalog created via the v3 API when querying via the v2 API. The
solution is to use the openstack client and to use the v3 API but this
obviously needs to be noted for upgrade impact and operators.
3) When glance is setup to use swift as a back-end, glance_store is unable
to authenticate to swift when the endpoint it uses is a v3 endpoint. There
is a review to master in progress [3] to fix this which is unlikely to make
it into kilo.

We (the openstack-ansible/os-ansible-deployment project) are tracking these
issues and doing tests to figure out all the bits. These are the bugs we've
hit so far. Also note that there is a WIP patch to gate purely on Keystone
v3 API's which is planned to become voting (hopefully) by the end of this
cycle.

[1] https://bugs.launchpad.net/swift/+bug/1468374
[2] https://review.openstack.org/195131
[3] https://review.openstack.org/193422

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Puppet] Keystone V2/V3 service endpoints

2015-07-30 Thread Rich Megginson

On 07/30/2015 08:53 AM, Matthew Mosesohn wrote:

Hi Rich,

Sorry, I meant to link [0] to https://bugs.launchpad.net/keystone/+bug/1470635
More responses inline.


On Thu, Jul 30, 2015 at 5:38 PM, Rich Megginson rmegg...@redhat.com wrote:

There is a patch upstream[1] that enables V3 service endpoint
creation, but v2.0 users/clients will not see these endpoints.


Right.  I'm not sure what the problem is - v3 clients can see the endpoints
created with v2.


But not vice versa.


But you said "We are using Keystone v2.0 API everywhere currently." - Are
you trying to move to use v3?

Not yet.


I'm still not sure what the problem is.  Are you trying to move to use v3
for auth, and use v3 resources like domains?

No. Avoiding that is better for now.

Option 1: Keep v2.0 API data in openrc and hack v3 keystone providers,
updating ENV with 2 vars:
OS_IDENTITY_API_VERSION=3
OS_AUTH_URL=$(echo $OLD_OS_AUTH_URL | sed -e 's/v2.0/v3/')
or their equivalent in command line parameters
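
Spelled out as a shell sketch (the auth URL below is a placeholder, not a value from this thread), the Option 1 rewrite is:

```shell
# Sketch of the Option 1 env-var rewrite; the host and port are examples.
OLD_OS_AUTH_URL="http://192.168.0.2:5000/v2.0"
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=$(echo "$OLD_OS_AUTH_URL" | sed -e 's/v2.0/v3/')
echo "$OS_AUTH_URL"   # prints http://192.168.0.2:5000/v3
```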


I don't understand.  When you say "v3 keystone providers" are you talking
about the puppet-keystone openstack.rb providers?  If so, they already do
something similar to the above.

Yes, the puppet-keystone openstack.rb providers. Almost, except they don't
update the identity_api_version. It just passes the version from ENV
or $HOME/openrc

https://github.com/openstack/puppet-openstacklib/blob/master/lib/puppet/provider/openstack/credentials.rb


Ok, I see.  The intention of that code is that, if you specify something 
in ENV/openrc, it will override the default settings in the provider.


What if the puppet-keystone openstack providers did not allow you to 
override the api version/url version with ENV/openrc?  Would that solve 
the problem?





Option 2: Update to v3 Identity API for all services and accept the
unmerged patch[1]. This route requires the most disruptive changes
because of [0] and I would like to avoid this.


I don't understand why you need [1] which makes keystone_endpoint use the v3
api.  v3 clients can see endpoints created with the v2 api.

Updating all clients to v3 is more effort at this point and v3
keystone is not targeted for Fuel 7.0.


Option 3: Revert puppet-keystone to version 5.1.0 which is before v3
became mandatory.


I'd like to see what is possible with Option 1 because it should be
possible to use the existing providers in puppet-keystone master
without too many hoops to make them all work cleanly. I'd really
prefer being able to provide all these parameters to the keystone
provider, rather than relying on the /root/openrc file or exporting
shell vars, but getting this issue unstuck is really the most
important.


I'm still not sure what the issue is, what you are prevented from doing.

The issue, concisely, is creating service_endpoints with v2.0 API and
other keystone resources with v3 API using one /root/openrc file.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Puppet] Keystone V2/V3 service endpoints

2015-07-30 Thread Rich Megginson

On 07/30/2015 08:24 AM, Matthew Mosesohn wrote:

Hi all,

It seems that I've reached an impasse with
https://review.openstack.org/#/q/topic:bp/detach-components-from-controllers,n,z
in Keystone with regards to Kilo puppet manifests. One of the
objectives is the ability to deploy Keystone on a separate node from
the controllers.

Here is what we know:

We are using Keystone v2.0 API everywhere currently.

Most Keystone providers for users, services, etc, use V3 API through
openstackclient

Keystone provider for service endpoints is still on V2. This is
because v2.0 clients can't see v3 endpoints. It's by design to not
be forward compatible[0].


I don't understand - [0] is a link to an ansible review?


There is a patch upstream[1] that enables V3 service endpoint
creation, but v2.0 users/clients will not see these endpoints.


Right.  I'm not sure what the problem is - v3 clients can see the 
endpoints created with v2.


Note that in Keystone, resources like roles, services, and endpoints are 
_not_ domain scoped, and therefore do not need to use the v3 api to CRUD.




Identity v2 and v3 behavior of openstackclient are vastly different.
There is nothing backward/forward compatible between the two, so it's a
hassle to deal with them in parallel.

openstackclient fails on v3 parameters unless version 3 is explicitly enabled.


But you said "We are using Keystone v2.0 API everywhere currently." - 
Are you trying to move to use v3?




What we can do to go forward?


I'm still not sure what the problem is.  Are you trying to move to use 
v3 for auth, and use v3 resources like domains?




Option 1: Keep v2.0 API data in openrc and hack v3 keystone providers,
updating ENV with 2 vars:
OS_IDENTITY_API_VERSION=3
OS_AUTH_URL=$(echo $OLD_OS_AUTH_URL | sed -e 's/v2.0/v3/')
or their equivalent in command line parameters


I don't understand.  When you say "v3 keystone providers" are you 
talking about the puppet-keystone openstack.rb providers?  If so, they 
already do something similar to the above.




Option 2: Update to v3 Identity API for all services and accept the
unmerged patch[1]. This route requires the most disruptive changes
because of [0] and I would like to avoid this.


I don't understand why you need [1] which makes keystone_endpoint use 
the v3 api.  v3 clients can see endpoints created with the v2 api.




Option 3: Revert puppet-keystone to version 5.1.0 which is before v3
became mandatory.


I'd like to see what is possible with Option 1 because it should be
possible to use the existing providers in puppet-keystone master
without too many hoops to make them all work cleanly. I'd really
prefer being able to provide all these parameters to the keystone
provider, rather than relying on the /root/openrc file or exporting
shell vars, but getting this issue unstuck is really the most
important.


I'm still not sure what the issue is, what you are prevented from doing.




[0] https://review.openstack.org/#/c/196943/
[1] https://review.openstack.org/#/c/178456/24

Best Regards,
Matthew Mosesohn

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Puppet] Keystone V2/V3 service endpoints

2015-07-30 Thread Rich Megginson

On 07/30/2015 10:54 AM, Matthew Mosesohn wrote:

The problem appears to be much simpler than that. The provider sets
identity_api_version okay, but doesn't alter OS_AUTH_URL api version.
That's the only reason why this is breaking.

It is broken in 2 places: in openstacklib's credentials class and in
keystone base provider. The keystone auth_endpoint logic seems to just
duplicate that of openstacklib's credentials class, so I think it
would make sense to consolidate that. I will prepare a patch to
puppet-keystone and puppet-openstacklib to address this.


Ok.  Please add me to the reviews.



On Thu, Jul 30, 2015 at 6:14 PM, Rich Megginson rmegg...@redhat.com wrote:

On 07/30/2015 08:53 AM, Matthew Mosesohn wrote:

Hi Rich,

Sorry, I meant to link [0] to
https://bugs.launchpad.net/keystone/+bug/1470635
More responses inline.


On Thu, Jul 30, 2015 at 5:38 PM, Rich Megginson rmegg...@redhat.com
wrote:

There is a patch upstream[1] that enables V3 service endpoint
creation, but v2.0 users/clients will not see these endpoints.


Right.  I'm not sure what the problem is - v3 clients can see the
endpoints
created with v2.


But not vice versa.


But you said "We are using Keystone v2.0 API everywhere currently." - Are
you trying to move to use v3?

Not yet.


I'm still not sure what the problem is.  Are you trying to move to use v3
for auth, and use v3 resources like domains?

No. Avoiding that is better for now.

Option 1: Keep v2.0 API data in openrc and hack v3 keystone providers,
updating ENV with 2 vars:
OS_IDENTITY_API_VERSION=3
OS_AUTH_URL=$(echo $OLD_OS_AUTH_URL | sed -e 's/v2.0/v3/')
or their equivalent in command line parameters


I don't understand.  When you say "v3 keystone providers" are you talking
about the puppet-keystone openstack.rb providers?  If so, they already do
something similar to the above.

Yes, the puppet-keystone openstack.rb providers. Almost, except they don't
update the identity_api_version. It just passes the version from ENV
or $HOME/openrc


https://github.com/openstack/puppet-openstacklib/blob/master/lib/puppet/provider/openstack/credentials.rb


Ok, I see.  The intention of that code is that, if you specify something in
ENV/openrc, it will override the default settings in the provider.

What if the puppet-keystone openstack providers did not allow you to
override the api version/url version with ENV/openrc?  Would that solve the
problem?


Option 2: Update to v3 Identity API for all services and accept the
unmerged patch[1]. This route requires the most disruptive changes
because of [0] and I would like to avoid this.


I don't understand why you need [1] which makes keystone_endpoint use the
v3
api.  v3 clients can see endpoints created with the v2 api.

Updating all clients to v3 is more effort at this point and v3
keystone is not targeted for Fuel 7.0.


Option 3: Revert puppet-keystone to version 5.1.0 which is before v3
became mandatory.


I'd like to see what is possible with Option 1 because it should be
possible to use the existing providers in puppet-keystone master
without too many hoops to make them all work cleanly. I'd really
prefer being able to provide all these parameters to the keystone
provider, rather than relying on the /root/openrc file or exporting
shell vars, but getting this issue unstuck is really the most
important.


I'm still not sure what the issue is, what you are prevented from doing.

The issue, concisely, is creating service_endpoints with v2.0 API and
other keystone resources with v3 API using one /root/openrc file.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposing Yanis Guenane core

2015-07-27 Thread Rich Megginson

On 07/27/2015 01:06 PM, Emilien Macchi wrote:

Puppet group,

Yanis has been working in our group for a while now.
He has been involved in a lot of tasks, let me highlight some of them:

* Many times, involved in improving consistency across our modules.
* Strong focus on data binding, backward compatibility and flexibility.
* Leadership on cookiebutter project [1].
* Active participation to meetings, always with actions, and thoughts.
* Beyond the stats, he has a good understanding of our modules, and
quite good number of reviews, regularly.

Yanis is for our group a strong asset and I would like him part of our
core team.
I really think his involvement, regularity and strong knowledge in
Puppet OpenStack will really help to make our project better and stronger.

I would like to open the vote to promote Yanis part of Puppet OpenStack
core reviewers.


+1



Best regards,

[1] https://github.com/openstack/puppet-openstack-cookiecutter


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet][keystone] How to uniquely name Keystone v3 resources in puppet?

2015-06-22 Thread Rich Megginson
The problem with puppet-keystone and Keystone v3 domains is naming of 
puppet resources contained within domains - users, groups, projects.


Suppose you have an admin user in domain dom1 and an admin user in 
domain dom2.  How do you declare these in puppet?  You can't do


keystone_user { 'admin': domain => 'dom1' }
and
keystone_user { 'admin': domain => 'dom2' }

AFAIK, resource names in puppet have to be unique.  I have introduced 
the string :: as the domain delimiter in puppet code. This allows you 
to fully qualify resource names with the domain in all domain scoped 
resources.


keystone_user { 'admin::dom1': }
and
keystone_user { 'admin': domain => 'dom2' }

This is a valid puppet manifest - the resource names are unique.

How do I refer to the admin user?  For example, I want to have a role 
assignment:


keystone_user_role { 'admin::dom2@project::somedomain': roles => ['adminrole'] }


How do I have an autorequire for the user?  That is, I can't know, in 
the autorequire method in the keystone_user_role.rb type code, if the 
resource name is going to be 'admin' or 'admin::dom2'.  If I just do 
autorequire for ['admin'], that will cause problems if a) there is no 
'admin' user because all 'admin' users are fully qualified b) the user 
named 'admin' is the one in dom1


I could avoid using autorequire and force the use of an explicit
require => 'name' in every keystone_user_role, but that would break
existing manifests.


I could use autorequire, but force manifest writers to be consistent 
with fully qualified resource naming.  That is, the manifest writer 
would have to know that the resource name of the admin user in dom1 is 
always 'admin::dom1' and the resource name of the admin user in dom2 is 
always 'admin', and they must be used that way, consistently, 
everywhere, even in deeply nested classes/defines.


Another problem is with the puppet provider self.instances method. This 
method queries all of the external resources of a given type and 
constructs something like a named puppet resource representing the 
external resource.  For example: the keystone_user self.instances does 
'openstack user list --long' and creates named puppet resources.


What happens if openstack user list --long returns admin in dom1 and 
admin in dom2?  How does self.instances name these uniquely?  If it 
always names these fully qualified with ::domain, this will cause 
problems.  This could be handled with some extra logic:
1) If name is unique (i.e. there is only one user named 'admin' in all 
domains), then just create the resource with the name 'admin' - no 
::domain.  This would require self.instances to keep track of every 
returned resource e.g. collect all of the list results in a hash, and 
only instantiate the resources once it is known that the name is unique.
2) If the resource is in the designated default domain (i.e. the 
domain id matches [identity] default_domain_id, or 'default' if 
[identity] default_domain_id is not configured), then use the name 
without the domain.
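The extra bookkeeping in 1, combined with the default-domain rule in 2, 
might look like this minimal Ruby sketch (the helper name and data 
shapes are hypothetical, not the actual puppet-keystone provider code):

```ruby
# Hypothetical sketch of the naming rules above, not real provider code.
# Give each user the short title when its name is unique across domains
# or when it lives in the default domain; otherwise qualify with ::domain.
def resource_titles(users, default_domain)
  counts = Hash.new(0)
  users.each { |u| counts[u[:name]] += 1 }   # rule 1: track every name
  users.map do |u|
    if counts[u[:name]] == 1 || u[:domain] == default_domain
      u[:name]                               # unique, or default domain
    else
      "#{u[:name]}::#{u[:domain]}"           # ambiguous: fully qualify
    end
  end
end

users = [
  { name: 'admin',    domain: 'dom1' },
  { name: 'admin',    domain: 'Default' },
  { name: 'someuser', domain: 'dom1' },
]
titles = resource_titles(users, 'Default')
# => ["admin::dom1", "admin", "someuser"]
```

Rule 2 alone corresponds to dropping the uniqueness check, which also 
removes the need to collect all results before instantiating resources.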


2 seems like the intuitive way to go - works well with existing 
manifests which are not yet domain aware, and things like puppet 
resource keystone_user [name] will almost always work as expected. But 
this means that all puppet-keystone manifests and code need to know 
the domain name corresponding to the default_domain_id, and that the 
resource in that domain should always be referred to as 'name', not 
'name::domain'.  For 
example, if you have user 'someuser' in domain 'Default', and domain 
'Default' is the default domain with an id of 'default', all resources 
should use keystone_user { 'name': } and keystone_user_role { 
'name@project': }


self.instances could return the fully qualified name _and_ the short 
name, where the short name is the resource in the default domain. This 
might cause confusion with puppet resource 'resource_name', when the 
user sees both 'admin' and 'admin::dom1' but at least the user will see 
one or the other, and both puppet resource keystone_user admin and 
puppet resource keystone_user admin::dom1 will work.  This would 
require some minor adjustment to the currently proposed patches for the 
self.instances - self.instances would need to be aware of the default 
domain and construct short names for those resources, and self.prefetch 
would also need to know to use the shortnamed resource for the default 
domain.  In this case, the short name would be an alias for the fully 
qualified name, sort of like a dns query where both the shortnames and 
fqdns are returned.
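The alias idea could be sketched like this (hypothetical Ruby, not the 
proposed patch itself):

```ruby
# Hypothetical sketch of the short-name alias: index each instance under
# its fully qualified name, and also under its short name when it lives
# in the default domain -- like a DNS answer with both fqdn and shortname.
def index_instances(instances, default_domain)
  index = {}
  instances.each do |inst|
    index["#{inst[:name]}::#{inst[:domain]}"] = inst
    index[inst[:name]] = inst if inst[:domain] == default_domain
  end
  index
end

instances = [
  { name: 'admin', domain: 'Default' },
  { name: 'admin', domain: 'dom1' },
]
index = index_instances(instances, 'Default')
# 'admin' and 'admin::Default' resolve to the same instance, while the
# dom1 admin is only reachable as 'admin::dom1'.
```

self.prefetch would then look resources up in such an index by whichever 
title the manifest happened to use.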





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][OSC] Keystone v3 user create --project $projid does not add user to project?

2015-06-18 Thread Rich Megginson

On 06/18/2015 06:43 AM, Raildo Mascena wrote:

Hi Rick,

In Keystone, Domains are the container of users, so a user belongs to 
a domain and you can grant role assignments for projects.


With this call that you made, you will set the project default to this 
user, after that you need to grant a role for this user in this project.


So, you can do: openstack role add --user USER_NAME --project 
TENANT_ID ROLE_NAME

and after that, you can verify that the assignment worked by doing: 
openstack role list --user USER_NAME --project TENANT_ID

You can find more information about this here: 
http://docs.openstack.org/user-guide-admin/manage_projects_users_and_roles.html or 
find us on #openstack-keystone


Yes, I realize that.

My issue was that in going from Keystone v2.0 to v3, openstack user 
create --project $project changed behavior - in v2.0, openstack user 
create --project $project adds the user as a member of the $project.  I 
wanted to know if this was 1) intentional behavior in v2.0, or 2) 
intentionally removed in v3.  I'm trying to make puppet-keystone work 
with v3, while at the same time making sure all of the existing puppet 
manifests work exactly as before.  Since this has changed, I had to work 
around it, by making the puppet-keystone user create function also add 
the user to the project.


https://review.openstack.org/#/c/174976/24/lib/puppet/provider/keystone_user/openstack.rb



Cheers,

Raildo Mascena


On Tue, Jun 16, 2015 at 1:52 PM Rich Megginson rmegg...@redhat.com 
mailto:rmegg...@redhat.com wrote:


Using admin token credentials with the Keystone v2.0 API and the
openstackclient, doing this:

# openstack project create bar --enable
# openstack user create foo --project bar --enable ...

The user will be added to the project.

Using admin token credentials with the Keystone v3 API and the
openstackclient, using the v3 policy file with is_admin:1 added just
about everywhere, doing this:

# openstack project create bar --domain Default --enable
# openstack user create foo --domain Default --enable --project
$project_id_of_bar ...

The user will NOT be added to the project.

Is this intentional?  Am I missing some sort of policy to allow user
create to add the user to the given project?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][OSC] Keystone v3 user create --project $projid does not add user to project?

2015-06-16 Thread Rich Megginson
Using admin token credentials with the Keystone v2.0 API and the 
openstackclient, doing this:


# openstack project create bar --enable
# openstack user create foo --project bar --enable ...

The user will be added to the project.

Using admin token credentials with the Keystone v3 API and the 
openstackclient, using the v3 policy file with is_admin:1 added just 
about everywhere, doing this:


# openstack project create bar --domain Default --enable
# openstack user create foo --domain Default --enable --project 
$project_id_of_bar ...


The user will NOT be added to the project.

Is this intentional?  Am I missing some sort of policy to allow user 
create to add the user to the given project?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][puppet] Federation using ipsilon

2015-06-13 Thread Rich Megginson

On 06/12/2015 07:30 PM, Adam Young wrote:

On 06/12/2015 04:53 PM, Rich Megginson wrote:
I've done a first pass of setting up a puppet module to configure 
Keystone to use ipsilon for federation, using 
https://github.com/richm/puppet-apache-auth-mods, and a version of 
ipsilon-client-install with patches 
https://fedorahosted.org/ipsilon/ticket/141 and 
https://fedorahosted.org/ipsilon/ticket/142, and a heavily modified 
version of the ipa/rdo federation setup scripts - 
https://github.com/richm/rdo-vm-factory.


I would like some feedback from the Keystone and puppet folks about 
this approach.


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I take it this is not WebSSO yet, but only Federation.

Around here...

https://github.com/richm/puppet-apache-auth-mods/blob/master/manifests/keystone_ipsilon.pp#L64 



You would need to have the trusted dashboard, etc.


Right.  In order to do websso, there is some additional setup that needs 
to be done in the apache conf for the keystone wsgi virtual hosts (which 
is in the rdo-federation-setup script).  There is also some additional 
configuration to do to Horizon to enable federated auth and/or websso.





But I think that is what you intend.


Right.  What I've done so far is only the first step.


However, without an ECP setup, we really have no way to test it.

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][puppet] Federation using ipsilon

2015-06-12 Thread Rich Megginson
I've done a first pass of setting up a puppet module to configure 
Keystone to use ipsilon for federation, using 
https://github.com/richm/puppet-apache-auth-mods, and a version of 
ipsilon-client-install with patches 
https://fedorahosted.org/ipsilon/ticket/141 and 
https://fedorahosted.org/ipsilon/ticket/142, and a heavily modified 
version of the ipa/rdo federation setup scripts - 
https://github.com/richm/rdo-vm-factory.


I would like some feedback from the Keystone and puppet folks about this 
approach.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] stubs considered harmful in spec tests

2015-06-04 Thread Rich Megginson
Summary - In puppet module spec tests, do not use stubs, which means 
the method will be called 0 or more times.  Instead, use expects, 
which means the method must be called exactly once, or use some other 
more fine-grained expectation method.


Our puppet unit tests mostly use rspec, but use Mocha 
http://gofreerange.com/mocha/docs/index.html for object mocking and 
method stubbing.


I have already run into several cases where the spec test result is 
misleading because stubs was used instead of expects, and I have 
spent a lot of time trying to figure out why a method was not called, 
because adding an expectation like


  provider.class.stubs(:openstack)
            .with('endpoint', 'list', '--quiet',
                  '--format', 'csv', [])
            .returns('ID,Region,Service Name,Service Type,Enabled,Interface,URL
2b38d77363194018b2b9b07d7e6bdc13,RegionOne,keystone,identity,True,admin,http://127.0.0.1:5002/v3
3097d316c19740b7bc866c5cb2d7998b,RegionOne,keystone,identity,True,internal,http://127.0.0.1:5001/v3
3445dddcae1b4357888ee2a606ca1585,RegionOne,keystone,identity,True,public,http://127.0.0.1:5000/v3
')

implies that openstack endpoint list will be called.

If at all possible, we should use an explicit expectation.  For example, 
in the above case, use expects instead:


  provider.class.expects(:openstack)
            .with('endpoint', 'list', '--quiet',
                  '--format', 'csv', [])
            .returns('ID,Region,Service Name,Service Type,Enabled,Interface,URL
2b38d77363194018b2b9b07d7e6bdc13,RegionOne,keystone,identity,True,admin,http://127.0.0.1:5002/v3
3097d316c19740b7bc866c5cb2d7998b,RegionOne,keystone,identity,True,internal,http://127.0.0.1:5001/v3
3445dddcae1b4357888ee2a606ca1585,RegionOne,keystone,identity,True,public,http://127.0.0.1:5000/v3
')

This means that openstack endpoint list must be called once, and only 
once.  For odd cases where you want a method to be called some certain 
number of times, or to return different values each time it is called, 
the Expectation class 
http://gofreerange.com/mocha/docs/Mocha/Expectation.html should be used 
to modify the initial expectation.
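To see why the distinction matters, here is a minimal hand-rolled 
illustration of the two verification behaviors (this is not Mocha's 
implementation, just a sketch of its semantics):

```ruby
# A stub passes verification no matter how many times (including zero)
# the method was called; an expectation demands an exact call count.
class FakeExpectation
  def initialize(exactly: nil)
    @exactly = exactly   # nil means "any number of calls", like stubs
    @calls = 0
  end

  def invoke
    @calls += 1
  end

  def satisfied?
    @exactly.nil? || @calls == @exactly
  end
end

stub_style   = FakeExpectation.new              # stubs(...)
expect_style = FakeExpectation.new(exactly: 1)  # expects(...)

# Suppose the code under test never calls :openstack at all:
stub_style.satisfied?    # true  -- the spec passes, masking the bug
expect_style.satisfied?  # false -- the spec fails, exposing the bug
```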


Unfortunately, I don't think we can just do a blanket 
s/stubs/expects/g in *_spec.rb, without incurring a lot of test 
failures.  So perhaps we don't have to do this right away, but I think 
future code reviews should -1 any spec file that uses stubs without a 
strong justification.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo v3 identity problems

2015-06-03 Thread Rich Megginson

On 06/03/2015 10:29 AM, Amy Zhang wrote:

Hi guys,

I have installed Kilo and try to use identity v3. I am using v3 policy 
file. I changed the domain_id for cloud admin as default. As cloud 
admin, I tried openstack domain list and got the error message 
saying that I was not authorized.


The part I changed in policy.json:

"cloud_admin": "rule:admin_required and domain_id:default",


The error I got from openstack domain list:

ERROR: openstack You are not authorized to perform the requested 
action: identity:create_domain (Disable debug mode to suppress these 
details.) (HTTP 403) (Request-ID: 
req-2f42b1da-9933-4494-9b39-c1664d154377)



Has anyone tried identity v3 in Kilo? Did you have this problem? Any 
suggestions?


Can you paste your policy file somewhere?  Did you restart the keystone 
service after changing your policy?  Can you provide your exact 
openstack command-line arguments and/or the rc file you sourced into 
your shell environment before running openstack?




Thanks
Amy
--
Best regards,
Amy (Yun Zhang)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Re: Puppet-OpenStack API providers - Follow up

2015-05-20 Thread Rich Megginson

On 05/20/2015 12:17 AM, Gilles Dubreuil wrote:

Hi,

Just wanted to add, for clarification, the need to restructure the
openstacklib.

The use of the resource[:auth] parameter is causing the providers to behave
differently depending on the context, as expressed earlier in this thread.

I would like to highlight the fact that this change is driven by
design, hence the need for a fix sooner rather than later, especially
at a time when the entire stack of providers is shifting to Keystone
V3. And this is actually a critical time because of patches waiting
upon this structural change.

The bp/auth-consolidation (sorry for the *bad* name) patches show that
authentication doesn't have to use parameters; the latter was a
mistake from a types/providers suitability viewpoint.

The restructure (bp/auth-consolidation) is not only working but also
simplifies the code, which is going to make the development/maintenance
of types/providers faster.


The current proposal for puppet-openstacklib is 
https://review.openstack.org/#/c/180407/


This breaks the published API in Juno as used by puppet-keystone. For 
example, the Juno branch code:

https://github.com/stackforge/puppet-keystone/blob/stable/juno/lib/puppet/provider/openstack.rb

def request(service, action, object, credentials, *properties)

but in the new code, there is no object request method, only self.request:

def self.request(service, action, *args)

Is it ok to break this API?  I don't think anyone is actually using it, 
but I have no idea.
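If keeping the old API mattered, one possible (purely hypothetical) 
compatibility shim would be to preserve the Juno instance-method 
signature as a thin wrapper over the new class method:

```ruby
# Hypothetical shim, not the actual puppet-openstacklib code: the Juno
# instance method delegates to the new class method so existing callers
# of provider.request(...) keep working.
class Provider
  def self.request(service, action, *args)
    # stand-in for the real call to the openstack CLI
    [service, action, args]
  end

  # Juno-era signature preserved; credentials are now resolved inside
  # self.request, so the argument is accepted but no longer forwarded.
  def request(service, action, object, credentials, *properties)
    self.class.request(service, action, object, *properties)
  end
end

result = Provider.new.request('identity', 'user_list', 'user',
                              { token: 'ADMIN' }, '--long')
# => ["identity", "user_list", ["user", "--long"]]
```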


Doing it this way also means that the change cannot be implemented 
incrementally - all of the type/provider/spec/other code has to be 
changed at the same time in a single commit (or live with failing gate 
tests).  Is this ok?


I have been working on the Keystone v3 code for a long time, and had a 
working implementation.
Does the Puppet OpenStack community think that it is the right thing to 
do to wait for the new authentication restructuring code to be merged, 
before the Keystone v3 code is merged?





If anyone has issues/questions with this please speak up!

Thank you,
Gilles

On 07/05/15 11:50, Gilles Dubreuil wrote:


On 07/05/15 11:33, Colleen Murphy wrote:


On Wed, May 6, 2015 at 6:26 PM, Gilles Dubreuil gil...@redhat.com
mailto:gil...@redhat.com wrote:

 It seems ~/.openrc is the only default [...]

The extras module places it at '/root/openrc' [1], so either the extras
module should be changed or the providers should look in /root/openrc;
either way, it should be consistent.


Agreed.

Let's use ~/openrc for now then.


Colleen

[1] 
http://git.openstack.org/cgit/stackforge/puppet-openstack_extras/tree/manifests/auth_file.pp#n86


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposal to configure Oslo libraries

2015-05-08 Thread Rich Megginson

On 05/08/2015 07:17 AM, Doug Hellmann wrote:

Excerpts from Ben Nemec's message of 2015-05-07 15:57:48 -0500:

I don't know much about the puppet project organization so I won't
comment on whether 1 or 2 is better, but a big +1 to having a common
way to configure Oslo opts.  Consistency of those options across all
services is one of the big reasons we pushed so hard for the libraries
to own their option definitions, so this would align well with the way
the projects are consumed.

- -Ben

Well said, Ben.

Doug


On 05/07/2015 03:19 PM, Emilien Macchi wrote:

Hi,

I think one of the biggest challenges working on Puppet OpenStack
modules is to keep code consistency across all our modules (~20).
If you've read the code, you'll see there is some differences
between RabbitMQ configuration/parameters in some modules and this
is because we did not have the right tools to make it properly. A
lot of the duplicated code we have comes from Oslo libraries
configuration.

Now, I come up with an idea and two proposals.

Idea
====

We could have some defined types to configure oslo sections in
OpenStack configuration files.

Something like:

define oslo::messaging::rabbitmq($user, $password) {
  ensure_resource($name, 'oslo_messaging_rabbit/rabbit_userid',
                  {'value' => $user})
  ...
}

Usage in puppet-nova:

::oslo::messaging::rabbitmq { 'nova_config':
  user     => 'nova',
  password => 'secrete',
}

And patch all our modules to consume these defines and finally
have consistency at the way we configure Oslo projects (messaging,
logging, etc).

Proposals
=========

#1 Creating puppet-oslo ... and having oslo::messaging::rabbitmq,
oslo::messaging::qpid, ..., oslo::logging, etc. This module will be
used only to configure actual Oslo libraries when we deploy
OpenStack. To me, this solution is really consistent with how
OpenStack works today and is scalable as soon we contribute Oslo
configuration changes in this module.


+1 - For the Keystone authentication options, I think it is important to 
encapsulate this and hide the implementation from the other services as 
much as possible, to make it easier to use all of the different types of 
authentication supported by Keystone now and in the future.  I would 
think that something similar applies to the configuration of other 
OpenStack services.




#2 Using puppet-openstacklib ... and having
openstacklib::oslo::messaging::(...) A good thing is our modules
already use openstacklib. But openstacklib does not configure
OpenStack now, it creates some common defines  classes that are
consumed in other modules.


I personally prefer #1 because:
* it's consistent with OpenStack.
* I don't want openstacklib to be the repo where we put everything
common.  We have to differentiate *common-in-OpenStack* and
*common-in-our-modules*.  I think openstacklib should continue to be
used for common things in our modules, like providers, wrappers,
database management, etc.  But to configure common OpenStack bits
(aka Oslo©), we might want to create puppet-oslo.

As usual, any thoughts are welcome,

Best,



__





OpenStack Development Mailing List (not for usage questions)

Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-BEGIN PGP SIGNATURE-


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Re: Puppet-OpenStack API providers - Follow up

2015-05-05 Thread Rich Megginson

On 05/05/2015 12:20 PM, Colleen Murphy wrote:
I'm cross-posting to the dev list since this conversation should be 
happening there and is related to another thread there.


Ok.  I'm not replying to puppet-openstack.


I'm going to top-post a summary and then respond inline.

The summary so far is that puppet-openstacklib provides a way to pass 
in credentials to an API-driven puppet type via an auth parameter[1] 
included in the types like so[2]. The benefit of this is that a user 
could create additional API resources, such as keystone_user, by 
passing in credentials directly to the type (presumably via hiera) 
without having to read credentials out of keystone.conf. The desire 
for something like this was outlined in the initial aviator 
blueprint[3] (the openstackclient blueprint[4] changed some of the 
design parameters, but not that one).


self.instances and self.prefetch are class methods provided by puppet 
that types and providers typically override. These methods are unable 
to read type parameters, as far as I can tell, because they do not 
have a specific instance from which to look up those parameters. In 
our implementation, self.instances exists so that the command `puppet 
resource keystone_user` works and returns a list of keystone_users, 
and we don't use self.prefetch. So, the way the providers are intended 
to work right now is: when creating a single resource, to run a custom 
'instances' object method to list resources and check for existence, 
which can use username/password credentials passed in to the resource 
OR use username/password credentials set as environment variables OR 
fall back to reading admin_token credentials from keystone.conf, as it 
always did; when run in `puppet resource` mode, it runs self.instances 
which can only use credentials set as environment variables or read 
credentials from keystone.conf since it has no way to accept an auth 
parameter.


There are a couple of problems with this approach, one outlined by 
Gilles below and another that I'm just noticing.


On Mon, May 4, 2015 at 8:34 PM, Gilles Dubreuil 
gil.dubre...@gmail.com mailto:gil.dubre...@gmail.com wrote:


Hi Colleen,

The issue is about having to deal with 2 different code paths
because authentication can optionally be passed to a resource
instance, whereas it can't when dealing with self.instances.
This creates many complications down the road.
I initially expressed that from a technical OO point of view,
although as you said it doesn't really matter.
So, let's look at those examples:
https://review.openstack.org/#/c/178385/3/lib/puppet/provider/keystone.rb

https://review.openstack.org/#/c/178456/6/lib/puppet/provider/keystone_endpoint/openstack.rb

Providers should not have to go through that.

That is indeed pretty awful, I had no idea this would get so 
complicated when I initially wrote this.


I don't believe the final implementation will be that bad.  I don't 
think we have to support v2 any more.  We will just assume we always 
have enough information to use v3 auth and the v3 API.  Based on our 
discussions here and on IRC, this is possible and desirable.




I'm also noticing what looks like a major flaw in that the object 
instances method seems almost entirely useless. A resource looks up 
the full list of resources but only ever stores one[5]. So the goal of 
replacing self.prefetch with an object method that had access to the 
auth params is just not accomplished at all.



This is why I think passing authentication details in some cases
(instance) should be avoided.
The authentication is provided by another layer, basically the
environment, wherever that comes from.

Given the added complexity that you pointed out and the fact that the 
motivation behind some of that complexity is moot, I'm inclined to 
agree. We could avoid this complexity and be able to take advantage 
of self.prefetch (which should speed up performance) if we did away 
with the auth parameter and the methods needed to accommodate that 
parameter.


The modules do not use that auth parameter themselves, it's intended 
as an add-on if users wanted to include extra keystone_user, etc, 
resources in their profiles and didn't want to worry about running it 
on the keystone node. I rather doubt anyone is actually using that 
yet, and I'm curious if anyone has a desire to keep it around.


I cannot figure out a use case for having per-resource authentication 
parameters.  It seems that the likely use case would be for per-run auth 
parameters, set via env. or config file.


However, in openstack.rb self.request - what if self.request were 
changed to do the same thing as the instance request method?




So if the providers could both read a config file (keystone.conf, 
glance-api.conf, etc.) and read environment variables for 
authentication, would that be desirable?


Yes.
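A sketch of that fallback order (hypothetical names, not the actual 
provider code):

```ruby
# Hypothetical lookup order for per-run credentials: an explicit auth
# hash wins, then OS_* environment variables, then the admin token from
# the service config file as a last resort.
def resolve_credentials(auth_param, env, conf)
  return auth_param if auth_param && !auth_param.empty?
  if env['OS_USERNAME'] && env['OS_PASSWORD']
    return { user: env['OS_USERNAME'], password: env['OS_PASSWORD'] }
  end
  { token: conf['admin_token'] }   # e.g. parsed from keystone.conf
end

env  = { 'OS_USERNAME' => 'admin', 'OS_PASSWORD' => 'secret' }
conf = { 'admin_token' => 'ADMIN' }
creds = resolve_credentials(nil, env, conf)
# => {:user=>"admin", :password=>"secret"}
```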



The auth param can accept a path to an openrc file, but if we just 

Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-04 Thread Rich Megginson

On 05/04/2015 06:03 PM, Mathieu Gagné wrote:

On 2015-05-04 7:35 PM, Rich Megginson wrote:

The way authentication works with the Icehouse branch is that
puppet-keystone reads the admin_token and admin_endpoint from
/etc/keystone/keystone.conf and passes these to the keystone command via
the OS_SERVICE_TOKEN env. var. and the --os-endpoint argument,
respectively.

This will not work on a node where Keystone is not installed (unless you
copy /etc/keystone/keystone.conf to all of your nodes).

I am assuming there are admins/operators that have actually deployed
OpenStack using puppet on nodes where Keystone is not installed?

We are provisioning keystone resources from a privileged keystone node
which accepts the admin_token. All other keystone servers have the
admin_token_auth middleware removed for obvious security reasons.



If so, how?  How do you specify the authentication credentials?  Do you
use environment variables?  If so, how are they specified?

When provisioning resources other than Keystones ones, we use custom
puppet resources and the credentials are passed as env variables to the
exec command. (they are mainly based on exec resources)


I'm talking about the case where you are installing an OpenStack service 
other than Keystone using puppet, and that puppet code for that module 
needs to create some sort of Keystone resource.


For example, install Glance on a node other than the Keystone node. 
puppet-glance is going to call class glance::keystone::auth, which will 
call keystone::resource::service_identity, which will call keystone_user 
{ $name }.  The openstack provider used by keystone_user is going to 
need Keystone admin credentials in order to create the user.  How are 
you passing those credentials?  As env. vars?  How?





I'm starting to think about moving away from env variables and use a
configuration file instead. I'm not sure yet about the implementation
details but that's the main idea.


Is there a standard openrc location?  Could openrc be extended to hold 
parameters such as the default domain to use for Keystone resources?  
I'm not talking about OS_DOMAIN_NAME, OS_USER_DOMAIN_NAME, etc. which 
are used for _authentication_, not resource creation.






For Keystone v3, in order to use v3 for authentication, and in order to
use the v3 identity api, there must be some way to specify the various
domains to use - the domain for the user, the domain for the project, or
the domain to get a domain scoped token.

If I understand correctly, you have to scope the user to a domain and
scope the project to a domain: user1@domain1 wishes to get a token
scoped to project1@domain2 to manage resources within the project?


Correct.  So you need to have some way to specify the domain for the 
user and the domain for the project (or the domain for a domain scoped 
token which allows you to manage resources within a domain). These 
correspond to the openstack command line parameters:

http://www.jamielennox.net/blog/2015/02/17/loading-authentication-plugins/
./myapp --os-auth-plugin v3password --help





There is a similar issue when creating domain scoped resources like
users and projects.  As opposed to editing dozens of manifests to add
domain parameters to every user and project (and the classes that call
keystone_user/tenant, and the classes that call those classes, etc.), is
there some mechanism to specify a default domain to use?  If not, what
about using the same mechanism used today to specify the Keystone
credentials?

I see there is support for a default domain in keystone.conf. You will
find it defined by the identity/default_domain_id=default config value.

Is this value not usable?


It is usable, and will be used, _only on Keystone nodes_. If you are on 
a node without Keystone, where will the default id come from?



And is it reasonable to assume the domain
default will always be present?


Yes, but that may not be the default domain.  Consider the case where 
you may want to separate user accounts from service pseudo accounts, 
by having them in separate domains.




Or is the question more related to the need to somehow override this
value in Puppet?


If there is a standard Puppet mechanism for being able to provide global 
parameters, other than something like rc files or environment variables, 
then yes.






The goal is that all keystone domain scoped resources will eventually
require specifying a domain, but that will take quite a while and I
would like to provide an incremental upgrade path.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-04 Thread Rich Megginson

On 05/04/2015 07:52 PM, Mathieu Gagné wrote:

On 2015-05-04 9:15 PM, Rich Megginson wrote:

On 05/04/2015 06:03 PM, Mathieu Gagné wrote:

On 2015-05-04 7:35 PM, Rich Megginson wrote:

The way authentication works with the Icehouse branch is that
puppet-keystone reads the admin_token and admin_endpoint from
/etc/keystone/keystone.conf and passes these to the keystone command via
the OS_SERVICE_TOKEN env. var. and the --os-endpoint argument,
respectively.

This will not work on a node where Keystone is not installed (unless you
copy /etc/keystone/keystone.conf to all of your nodes).

I am assuming there are admins/operators that have actually deployed
OpenStack using puppet on nodes where Keystone is not installed?

We are provisioning keystone resources from a privileged keystone node
which accepts the admin_token. All other keystone servers have the
admin_token_auth middleware removed for obvious security reasons.



If so, how?  How do you specify the authentication credentials?  Do you
use environment variables?  If so, how are they specified?

When provisioning resources other than Keystone ones, we use custom
puppet resources and the credentials are passed as env variables to the
exec command. (they are mainly based on exec resources)

I'm talking about the case where you are installing an OpenStack service
other than Keystone using puppet, and that puppet code for that module
needs to create some sort of Keystone resource.

For example, install Glance on a node other than the Keystone node.
puppet-glance is going to call class glance::keystone::auth, which will
call keystone::resource::service_identity, which will call keystone_user
{ $name }.  The openstack provider used by keystone_user is going to
need Keystone admin credentials in order to create the user.

We fixed that part by not provisioning Keystone resources from Glance
nodes but from Keystone nodes instead.

We do not allow our users to create users/groups/projects, only a user
with the admin role can do it. So why would you want to store/use admin
credentials on unprivileged nodes such as Glance? IMO, the glance
user shouldn't be able to create/edit/delete users/projects/endpoints,
that's the keystone nodes' job.


Ok.  You don't need the Keystone superuser admin credentials on the 
Glance node.


Is the puppet-glance code completely separable so that you can call only 
glance::keystone::auth (or other classes that use Keystone resources) 
from the Keystone node, and all of the other puppet-glance code on the 
Glance node?  Does the same apply to all of the other puppet modules?




If you do not wish to explicitly define Keystone resources for Glance on
Keystone nodes but instead let Glance nodes manage their own resources,
you could always use exported resources.

You let Glance nodes export their keystone resources and then you ask
Keystone nodes to realize them where admin credentials are available. (I
know some people don't really like exported resources for various reasons)
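
The pattern Mathieu describes would look roughly like this — a hedged sketch, with hypothetical resource titles, tag names, and parameter values; it also assumes storeconfigs/PuppetDB is enabled on the master:

```puppet
# On the Glance node: export the keystone user instead of realizing it
# locally (the @@ prefix marks the resource as exported, not applied here).
@@keystone_user { 'glance':
  ensure   => present,
  password => 'glance_secret',   # hypothetical value
  tag      => 'glance-keystone',
}

# On the Keystone node (where admin credentials are available): collect
# and realize every exported keystone_user carrying the matching tag.
Keystone_user <<| tag == 'glance-keystone' |>>
```

The exported resource is stored centrally and only applied on the node that collects it, which is why the admin credentials never need to leave the Keystone node.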


I'm not familiar with exported resources.  Is this a viable option that 
has less impact than just requiring Keystone resources to be realized on 
the Keystone node?






How are you passing those credentials?  As env. vars?  How?

As stated, we use custom Puppet resources (defined types) which are
mainly wrappers around exec. You can pass environment variables to exec
through the environment parameter. I don't like it but that's how I did
it ~2 years ago. I haven't changed it due to lack of need to change it.
This might change soon with Keystone v3.


Ok.





I'm starting to think about moving away from env variables and use a
configuration file instead. I'm not sure yet about the implementation
details but that's the main idea.

Is there a standard openrc location?  Could openrc be extended to hold
parameters such as the default domain to use for Keystone resources?
I'm not talking about OS_DOMAIN_NAME, OS_USER_DOMAIN_NAME, etc. which
are used for _authentication_, not resource creation.

I'm not aware of any standard openrc location other than ~/.openrc
which needs to be sourced before running any OpenStack client commands.

I however understand what you mean. I do not have any idea on how I
would implement it. I'm still hoping someday to be enlightened by a
great solution.

I'm starting to think about some sort of credentials vault. You store
credentials in it and you tell your resource to use that specific
credentials. You then no longer need to pass around 6-7
variables/parameters.


I'm sure Adam Young has some ideas about this . . .





There is a similar issue when creating domain scoped resources like
users and projects.  As opposed to editing dozens of manifests to add
domain parameters to every user and project (and the classes that call
keystone_user/tenant, and the classes that call those classes, etc.), is
there some mechanism to specify a default domain to use?  If not, what
about using the same mechanism used today to specify the Keystone

[openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?

2015-05-04 Thread Rich Megginson
I'm currently working on Keystone v3 support in the openstack puppet 
modules.


The way authentication works with the Icehouse branch is that 
puppet-keystone reads the admin_token and admin_endpoint from 
/etc/keystone/keystone.conf and passes these to the keystone command via 
the OS_SERVICE_TOKEN env. var. and the --os-endpoint argument, respectively.
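
As a rough shell sketch of the mechanism described above — the file path and option names are as described, but the parsing here is a simplified assumption (the real provider does this in Ruby against /etc/keystone/keystone.conf):

```shell
# Sketch: how the Icehouse provider effectively authenticates - read the
# admin token and endpoint from keystone.conf and hand them to the
# keystone CLI.  A throwaway copy of the file is used here for illustration.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[DEFAULT]
admin_token = s3cr3t
admin_endpoint = http://127.0.0.1:35357/v2.0
EOF

# Extract the two values (simplified INI parsing with awk).
OS_SERVICE_TOKEN=$(awk -F' *= *' '$1 == "admin_token" {print $2}' "$conf")
endpoint=$(awk -F' *= *' '$1 == "admin_endpoint" {print $2}' "$conf")
export OS_SERVICE_TOKEN

echo "token=$OS_SERVICE_TOKEN endpoint=$endpoint"
# The provider would then run something along the lines of:
#   keystone --os-endpoint "$endpoint" user-list
rm -f "$conf"
```

This also makes the limitation concrete: without /etc/keystone/keystone.conf present, there is nothing to parse, so the mechanism cannot work on a non-Keystone node.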


This will not work on a node where Keystone is not installed (unless you 
copy /etc/keystone/keystone.conf to all of your nodes).


I am assuming there are admins/operators that have actually deployed 
OpenStack using puppet on nodes where Keystone is not installed?


If so, how?  How do you specify the authentication credentials?  Do you 
use environment variables?  If so, how are they specified?


For Keystone v3, in order to use v3 for authentication, and in order to 
use the v3 identity api, there must be some way to specify the various 
domains to use - the domain for the user, the domain for the project, or 
the domain to get a domain scoped token.


There is a similar issue when creating domain scoped resources like 
users and projects.  As opposed to editing dozens of manifests to add 
domain parameters to every user and project (and the classes that call 
keystone_user/tenant, and the classes that call those classes, etc.), is 
there some mechanism to specify a default domain to use?  If not, what 
about using the same mechanism used today to specify the Keystone 
credentials?
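
For illustration, a v3-style openrc carrying the authentication-scoping variables mentioned above might look like this — the OS_* names are the ones the openstack client understands, but the host and credential values are hypothetical:

```shell
# Hypothetical ~/.openrc for Keystone v3 authentication.  Note the
# *_DOMAIN_NAME variables scope _authentication_; they do not provide a
# default domain for resource creation, which is the open question here.
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://keystone.example.com:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=secret            # hypothetical
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_PROJECT_DOMAIN_NAME=Default
```

Sourcing such a file before running openstack client commands is the conventional usage; the question in this thread is whether the same file could reasonably carry resource-creation defaults as well.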


The goal is that all keystone domain scoped resources will eventually 
require specifying a domain, but that will take quite a while and I 
would like to provide an incremental upgrade path.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Status of Beaker jobs

2015-05-01 Thread Rich Megginson

On 05/01/2015 05:35 PM, Emilien Macchi wrote:

Hi,

TL;DR: Please review:
https://review.openstack.org/#/q/status:open++topic:bug-1444736,n,z


I would like to share some progress/feedback about integrating Beaker in
our modules.
First, everything is updated in the Google Doc [1]

My main blockers were related to packaging in UCA, I had sometimes to do
ugly Puppet patches (as dependencies) to fix some bugs.

Packaging is missing
===
Some projects don't have packages (Gnocchi & Tuskar), so I can't test
them in OpenStack CI for now.

Bugs in packaging
===
Ensure python-mysqldb is installed before MySQL db_sync (ceilometer)
https://review.openstack.org/#/c/177593/

Allow to deploy Sahara on Ubuntu
https://review.openstack.org/#/c/179136/

FWaaS: update packaging for Debian & Ubuntu
https://review.openstack.org/#/c/179210/

chmod /etc/neutron to 755 instead of 750
https://review.openstack.org/#/c/179235/

Fix Sahara installation for Ubuntu (workaround)
https://bugs.launchpad.net/puppet-sahara/+bug/1450659
https://bugs.launchpad.net/cloud-archive/+bug/1450945
https://review.openstack.org/#/c/179276/

Ensure /var/log/ironic ownership
https://review.openstack.org/#/c/179531/
https://bugs.launchpad.net/cloud-archive/+bug/1450942

python-openstackclient

For Keystone v3 API, we *need* 1.0.3 at least if we want to have our new
providers working[2], it should be in UCA soon. Fedora will have the
right package too.


Looks like RDO kilo now has python-openstackclient 1.0.3.  I've been 
using that since last night and it has been working fine.



I'm helping Rich to test the feature with
https://review.openstack.org/#/c/178828/ (Beaker will be very useful to
actually test the whole thing with patch dependencies).


Thanks Emilien!




TripleO
===
I have not started Beaker tests yet, because I first want to see unit
testing. (Is anyone volunteering? Otherwise I'll do it next week.)


Have a great week-end,

[1]
https://docs.google.com/spreadsheets/d/1i2z5QyvukHCWU_JjkWrTpn-PexPBnt3bWPlQfiDGsj8/edit#gid=0
[2]
https://review.openstack.org/#/q/status:open+project:stackforge/puppet-keystone+branch:master+topic:bp/api-v3-support,n,z


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Problem about Juno Keystone Identity V3

2015-04-03 Thread Rich Megginson

On 04/03/2015 09:09 AM, Amy Zhang wrote:

Hi guys,

I switched Keystone Identity v2 to v3 in Icehouse and it works 
perfectly. However, when I use the same approach to switch Keystone 
Identity v2 to v3 in Juno, it doesn't work. It gives me the error: 
ERROR: openstack Internal Server Error (HTTP 500).


What steps did you follow?  Links?



I traced back through the code and found that the error comes from 
Apache, but it doesn't make sense that Apache itself has a problem; it 
might be some problem in Keystone which leads to the Apache error.


Does any one have any idea of it? Thanks!
--
Best regards,
Amy (Yun Zhang)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Problem about Juno Keystone Identity V3

2015-04-03 Thread Rich Megginson

On 04/03/2015 12:11 PM, Fox, Kevin M wrote:

We've run into issues with nova+neutron and keystone v3 with Juno.

Specifically this one:
https://bugs.launchpad.net/nova/+bug/1424462

But there may be others that I don't know about.


Yes, there are some problems with some components that don't use the 
keystone_authtoken section in their configs or otherwise can't use the 
latest python-keystonemiddleware.


But I would like to know what steps worked to switch Keystone Identity 
from V2 to V3 in Icehouse that don't work for Juno.




Thanks,
Kevin


*From:* Rich Megginson [rmegg...@redhat.com]
*Sent:* Friday, April 03, 2015 10:52 AM
*To:* openstack-dev@lists.openstack.org
*Subject:* Re: [openstack-dev] Problem about Juno Keystone Identity V3

On 04/03/2015 09:09 AM, Amy Zhang wrote:

Hi guys,

I switched Keystone Identity v2 to v3 in Icehouse and it 
works perfectly. However, when I use the same approach to switch 
Keystone Identity v2 to v3 in Juno, it doesn't work. It gives me the 
error: ERROR: openstack Internal Server Error (HTTP 500).


What steps did you follow?  Links?



I traced back through the code and found that the error comes from 
Apache, but it doesn't make sense that Apache itself has a problem; it 
might be some problem in Keystone which leads to the Apache error.


Does any one have any idea of it? Thanks!
--
Best regards,
Amy (Yun Zhang)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Requesting FFE for last few remaining patches for domain configuration SQL support

2015-03-17 Thread Rich Megginson

On 03/17/2015 01:26 PM, Henry Nash wrote:

Hi

Prior to Kilo, Keystone supported the ability for its Identity 
backends to be specified on a domain-by-domain basis - primarily so 
that different domains could be backed by different LDAP servers. In 
this previous support, you defined the domain-specific configuration 
options in a separate config file (one for each domain that was not 
using the default options). While functional, this can make onboarding 
new domains somewhat problematic since you need to create the domains 
via REST and then create a config file and push it out to the keystone 
server (and restart the server). As part of the Keystone Kilo release 
we are supporting the ability to manage these domain-specific 
configuration options via REST (and allow them to be stored in the 
Keystone SQL database). More detailed information can be found in the 
spec for this change at: https://review.openstack.org/#/c/123238/


The actual code change for this is split into 11 patches (to make it 
easier to review), the majority of which have already merged - and the 
basic functionality described is already functional.  There are some 
final patches that are in-flight, a few of which are unlikely to meet 
the m3 deadline.  These relate to:


1) Migration assistance for those who want to move from the current 
file-based domain-specific configuration files to the SQL-based 
support (i.e. a one-off upload of their config files).  This is 
handled in the keystone-manage tool - See: 
https://review.openstack.org/160364
2) The notification between multiple keystone server processes that a 
domain has a new configuration (so that a restart of keystone is not 
required) - See: https://review.openstack.org/163322
3) Support for substitution of sensitive config options into 
whitelisted options (this might actually make the m3 deadline anyway) 
- See: https://review.openstack.org/159928
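
For reference, the one-off migration in (1) is driven from keystone-manage; assuming the subcommand lands as described in the spec and review above, the invocation would look roughly like this (run on the Keystone server itself):

```shell
# Upload every existing file-based domain config into the SQL store:
keystone-manage domain_config_upload --all

# Or upload the config file for a single domain:
keystone-manage domain_config_upload --domain-name mydomain
```

This is an ops fragment that requires a configured Keystone installation, so treat the exact flags as subject to the final merged patches.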


Given that we have the core support for this feature already merged, I 
am requesting an FFE to enable these final patches to be merged ahead 
of RC.


This would be nice to use in puppet-keystone for domain configuration.  
Is there support planned for the openstack client?




Henry


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] HTTPD Config

2015-03-06 Thread Rich Megginson

On 03/06/2015 12:37 AM, Matthias Runge wrote:

On 05/03/15 19:49, Adam Young wrote:


I'd like to drop port 5000 all-together, as we are using a port assigned
to a different service.  35357 is also problematic as it is in the
middle of the Ephemeral range.  Since we are talking about running
everything in one web server anyway, using port 80/443 for all web stuff
is the right approach.

I have thought about this as well. The issue here is, URLs for keystone
and horizon will probably clash.
(is https://server/api/... a call for keystone or for horizon?).

No matter what we do in devstack, this is something, horizon and
keystone devs need to fix first. E.g. in Horizon, we still discover hard
coded URLs here and there. To catch that kind of thing, I had a patch
up for review, to easily configure moving Horizon from using http server
root to something different.

I would expect the same thing for keystone, too.


It's the same thing for almost every project.  I've been working on the 
puppet-openstack code quite a bit lately, and there are many, many 
places that assume keystone is listening to http(s)://host:5000/v2.0 or 
host:35357/v2.0.



Matthias


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Designate] Core reviewer update

2015-02-12 Thread Rich Megginson

On 02/12/2015 08:51 AM, Hayes, Graham wrote:

On 12/02/15 15:50, Kiall Mac Innes wrote:

+1 - Tim has been giving good reviews over the last few months and will
make a good addition..

Thanks,
Kiall

On 12/02/15 15:40, Vinod Mangalpally wrote:

Hello Designate folks,

Betsy Luzader (betsy) resigned from her core reviewer position on
Designate. In order to keep the momentum of reviews in Designate going,
I'd like to propose adding Tim Simmons (timsim) to designate-core.

For context on Designate reviews and who has been active, please
see http://stackalytics.com/report/contribution/designate-group/30

Designate members, please reply with your vote on the proposed change.

Thanks
vinod


+1 - I think Tim will make a great addition :)


+1




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Designate] DNS Services PTL Candidacy

2014-09-23 Thread Rich Megginson

On 09/23/2014 05:11 AM, Mac Innes, Kiall wrote:


Hi all,

I'd like to announce my candidacy for the DNS Services Program PTL 
position.


I’ve been involved in Designate since day one, as both the original 
author and as pseudo-PTL pre-incubation. Designate and the DNS 
Services program have come a long way since the project was first 
introduced to StackForge under my lead, and while we’re far from done, 
I feel I’m more than capable and best positioned to bring the project 
through to fruition.


Additionally, I manage the team at HP running the largest known public 
and production deployment of Designate for HP Cloud – Giving me the 
operational experience necessary to guide the project towards meeting 
real world operational requirements.


Thanks – Kiall



+1 - Kiall is very knowledgeable about DNS and has brought Designate a 
long way in a short time.







Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 0.0.0.0:5000

2014-07-17 Thread Rich Megginson

On 07/16/2014 10:40 PM, Joe Jiang wrote:

Hi all,
Thanks for your responds.

I tried running # sudo semanage port -l|grep 5000 in my environment 
and got the same information.

 ...
 commplex_main_port_t tcp 5000
 commplex_main_port_t udp 5000
Then I wanted to remove this port (5000) from the SELinux policy rules 
list using this command (semanage port -d -p tcp -t commplex_port_t 5000),
but the console output is /usr/sbin/semanage: Port tcp/5000 is defined 
in policy, cannot be deleted, and 'udp/5000' gives the same reply.
Some sources[1] say this port is declared in the corenetwork source 
policy, which is compiled into the base module.

So, do I have to recompile the SELinux module?


I think that's the only way to do it if you want to relabel port 5000.
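
A commonly cited alternative to rebuilding the policy module is to modify the port's type rather than delete it — a hedged sketch, to be run as root on a host where re-typing the port is acceptable:

```shell
# Instead of deleting the base-policy definition (which semanage refuses
# to do), change the type of tcp/5000 so httpd is allowed to bind it.
# -m modifies an existing port definition, including base-policy ones.
semanage port -m -t http_port_t -p tcp 5000

# Verify the new labeling:
semanage port -l | grep -w 5000
```

Whether http_port_t is the right target type depends on which service will own the port, so check the local policy before applying this.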





Thanks.
Joe.

[1]
http://www.redhat.com/archives/fedora-selinux-list/2009-September/msg00056.html





 Another problem with port 5000 in Fedora, and probably more recent
 versions of RHEL, is the selinux policy:

 # sudo semanage port -l|grep 5000
 ...
 commplex_main_port_t tcp 5000
 commplex_main_port_t udp 5000

 There is some service called commplex that has already claimed port
 5000 for its use, at least as far as selinux goes.








Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 0.0.0.0:5000

2014-07-16 Thread Rich Megginson

On 07/16/2014 08:43 AM, Brian Haley wrote:

On 07/16/2014 07:34 AM, Joe Jiang wrote:

Hi all,

When I set up my development environment using devstack on CentOS 6.5,
fetching the devstack source via github.com and checking out the 
stable/icehouse branch, I hit the error shown in the log fragment below[1].
I'm not sure whether it is OK to ask my question on this mailing list,
because I have searched all over the web and still have not resolved it.
Anyway, I need your help, and your help is highly appreciated.

I tripped over a similar issue with Horizon yesterday and found this bug:

https://bugs.launchpad.net/devstack/+bug/1340660

The error I saw was with port 80, so I was able to disable Horizon to get around
it, and I didn't see anything obvious in the apache error logs to explain it.

-Brian


Another problem with port 5000 in Fedora, and probably more recent 
versions of RHEL, is the selinux policy:


# sudo semanage port -l|grep 5000
...
commplex_main_port_t   tcp  5000
commplex_main_port_t   udp  5000

There is some service called commplex that has already claimed port 
5000 for its use, at least as far as selinux goes.






2014-07-16 11:08:53.282 | + sudo sed '/^Listen/s/^.*$/Listen 0.0.0.0:80/' -i
/etc/httpd/conf/httpd.conf
2014-07-16 11:08:53.295 | + sudo rm -f '/var/log/httpd/horizon_*'
2014-07-16 11:08:53.310 | + sudo sh -c 'sed -e 
2014-07-16 11:08:53.310 | s,%USER%,stack,g;
2014-07-16 11:08:53.310 | s,%GROUP%,stack,g;
2014-07-16 11:08:53.310 | s,%HORIZON_DIR%,/opt/stack/horizon,g;
2014-07-16 11:08:53.310 | s,%APACHE_NAME%,httpd,g;
2014-07-16 11:08:53.310 | s,%DEST%,/opt/stack,g;
2014-07-16 11:08:53.310 | s,%HORIZON_REQUIRE%,,g;
2014-07-16 11:08:53.310 |  /home/devstack/files/apache-horizon.template

/etc/httpd/conf.d/horizon.conf'

2014-07-16 11:08:53.321 | + start_horizon
2014-07-16 11:08:53.321 | + restart_apache_server
2014-07-16 11:08:53.321 | + restart_service httpd
2014-07-16 11:08:53.321 | + is_ubuntu
2014-07-16 11:08:53.321 | + [[ -z rpm ]]
2014-07-16 11:08:53.322 | + '[' rpm = deb ']'
2014-07-16 11:08:53.322 | + sudo /sbin/service httpd restart
2014-07-16 11:08:53.361 | Stopping httpd:  [FAILED]
2014-07-16 11:08:53.532 | Starting httpd: httpd: Could not reliably determine
the server's fully qualified domain name, using 127.0.0.1 for ServerName
2014-07-16 11:08:53.533 | (98)Address already in use: make_sock: could not bind
to address [::]:5000
2014-07-16 11:08:53.533 | (98)Address already in use: make_sock: could not bind
to address 0.0.0.0:5000
2014-07-16 11:08:53.533 | no listening sockets available, shutting down
2014-07-16 11:08:53.533 | Unable to open logs
2014-07-16 11:08:53.547 |  [FAILED]
2014-07-16 11:08:53.549 | + exit_trap
2014-07-16 11:08:53.549 | + local r=1
2014-07-16 11:08:53.549 | ++ jobs -p
2014-07-16 11:08:53.550 | + jobs=
2014-07-16 11:08:53.550 | + [[ -n '' ]]
2014-07-16 11:08:53.550 | + exit 1
[stack@stack devstack]$






Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 0.0.0.0:5000

2014-07-16 Thread Rich Megginson

On 07/16/2014 09:10 AM, Morgan Fainberg wrote:

--
From: Rich Megginson rmegg...@redhat.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: July 16, 2014 at 08:08:00
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [devstack][keystone] (98)Address already in use: 
make_sock: could not bind to address [::]:5000  0.0.0.0:5000

SNIP


Another problem with port 5000 in Fedora, and probably more recent
versions of RHEL, is the selinux policy:
  
# sudo semanage port -l|grep 5000

...
commplex_main_port_t tcp 5000
commplex_main_port_t udp 5000
  
There is some service called commplex that has already claimed port

5000 for its use, at least as far as selinux goes.
  

Wouldn’t this also affect the eventlet-based Keystone using port 5000?


Yes, it should.


This is not an apache-specific related issue is it?


No, afaict.



—Morgan





[openstack-dev] [designate] sink listeners for fixed and floating IP addresses

2014-04-30 Thread Rich Megginson
designate-sink currently ships with a nova_fixed handler, which listens 
for nova events compute.instance.create.end and 
compute.instance.delete.start, and a neutron_floatingip handler for the events 
floatingip.update.end and floatingip.delete.start.


1) is it correct to say that nova_fixed is for internal IP addresses 
(for intra-cloud networking) and that neutron_floatingip is for 
external IP addresses (for access from outside the cloud)?
2) nova can also assign and remove floating IP addresses (nova 
add-floating-ip/remove-floating-ip) - should we have listeners for nova 
network.floating_ip.associate and network.floating_ip.disassociate events?

3) is there a difference between nova and neutron floating IP addresses?




Re: [openstack-dev] [Neutron][Nova][Designate][L3][IPv6] Discussion about Cross-Project Integration of DNS at Summit

2014-04-29 Thread Rich Megginson

On 04/29/2014 06:59 PM, Collins, Sean wrote:

Count me in!


+1



[openstack-dev] [designate] Workshop Summary - Austin, TX January 2014

2014-01-30 Thread Rich Megginson

Designate (OpenStack DNSaaS) Workshop - Austin, TX - January 27-29, 2014

hosted by Rackspace

= Development Process Review =

Adrian Otto gave an overview of the Solum development process.  One 
interesting idea is to keep commits posted for review small, under 200 
lines, to ease reviews and encourage more reviewers.  Designate team 
will review commits to see if that is a reasonable size or if larger 
would be ok.  Everyone should spend some time (e.g. an hour per day) to 
review commits.


Overall, the core interest is making the development process more 
approachable for outsiders; some recommendations that came from this 
discussion are:


* Limit the scope and size of both blueprints and commits.
   This is generally just good engineering practice.
   Makes code-reviews more approachable
   Makes tackling blueprints more approachable without intimate 
knowledge of Designate


* Review in-progress blueprint status during the weekly meeting.
* Document contribution process
   Processes for bugs and blueprints
   How to submit code via Gerrit / Jenkins
* Make use of the openstack-dev mailing list for team discussions where 
appropriate


= Blueprint Review =

Major features for Icehouse are v2 API, Server Pools, and blacklists.  
Some of the new API features are support for the hierarchical nature of 
DNS (zones, pools, servers, records), support for bulk operations, 
support for bulk load of existing BIND files, and different ways to 
support searching (e.g. have a /search? URL or require user to provide 
/zones etc. hierarchy).  Discussed how to support doing transacted updates.


= Mini-DNS =

One of the problems is how to do an N-phase commit over all of the 
backend DNS servers.  The proposed solution is to create a mini-dns 
server inside of designate.  Then all designate has to do is send a 
standard DNS NOTIFY request to each backend, and have each backend 
asynchronously initiate a DNS AXFR to get the latest updates.  This 
should support AXFR at first, then IXFR (incremental) to decrease 
latency and bandwidth.  IXFR will depend on the Review changes and 
Rollback blueprint 
(https://blueprints.launchpad.net/designate/+spec/point-in-time-restore-zone).  
Existing DNS libraries (e.g. dnspython) and a from-scratch approach 
will be reviewed for the Mini-DNS implementation; Mini-DNS will not be 
a full-fledged DNS server.
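
To make the NOTIFY/AXFR flow concrete, the transfer each backend performs after receiving a NOTIFY is the standard one you can exercise with dig — server and zone names below are placeholders:

```shell
# What mini-DNS triggers: on receiving NOTIFY for a zone, each backend
# pulls the full zone from designate's mini-DNS via a standard AXFR.
dig @minidns.example.com example.org. AXFR

# The later IXFR optimization asks only for changes since a known
# serial (here a hypothetical serial 2014013001):
dig @minidns.example.com example.org. IXFR=2014013001
```

These queries require a reachable DNS server that permits zone transfers, so they are illustrative of the protocol flow rather than runnable against an arbitrary host.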


= Testing =

Right now testing is done in shell via the designate plugin for 
devstack.  Work on a better testing framework is being gated by 
incubation status.  If designate is incubated, it will be easy to use 
openstack tempest.  Unfortunately, there does not seem to be a way to 
plug in to tempest.


= Global Load Balancing =

There was a presentation from the Rackspace load balancing team about 
global load balancing and how it might fit in with designate, or with 
some of the other related projects such as Neutron LBaaS and Libra.


= Detailed Minutes =

https://etherpad.openstack.org/p/DesignateAustinWorkshop2014-01



[openstack-dev] Introduction: Rich Megginson - Designate project

2014-01-15 Thread Rich Megginson
Hello.  My name is Rich Megginson.  I am a Red Hat employee interested 
in working on Designate (DNSaaS), primarily in the areas of integration 
with IPA DNS, DNSSEC, and authentication (Keystone).


I've signed up for the openstack/launchpad/gerrit accounts.

Be seeing you (online).
