Re: [openstack-dev] Hyper-V 2008 R2 support

2015-08-09 Thread Henry Nash
Hi

So adding a deprecation warning but saying “the code is there but not 
tested” in Liberty isn’t really doing it right.  The deprecation warning should 
come in a release where the code is still tested and working (so there is no 
danger of breaking customers), but users are warned that they need to change 
what they are doing by a certain release, or it may no longer work.
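Henry's point — warn while the code still works — is the standard pattern. A minimal stdlib sketch of such a startup check (the function name, version tuple, and message are illustrative, not the actual Nova Hyper-V driver code; Hyper-V 2008 R2 reports Windows kernel 6.1):

```python
import warnings

def check_hypervisor_support(version):
    """Warn when running on a platform slated for removal.

    `version` is a (major, minor) Windows kernel version tuple;
    anything below 6.2 (i.e. 2008 R2 and older) gets the warning.
    Illustrative only -- not the driver's real detection code.
    """
    if version < (6, 2):
        warnings.warn(
            "Hyper-V 2008 R2 support is deprecated and will be removed "
            "in a future release; please migrate to Hyper-V 2012 or later.",
            DeprecationWarning)
        return False
    return True
```

Run once at service startup, this gives operators a full cycle of warnings before the code is actually dropped.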

Henry
 On 4 Aug 2015, at 16:28, Alessandro Pilotti apilo...@cloudbasesolutions.com 
 wrote:
 
 
 On 04 Aug 2015, at 17:56, Daniel P. Berrange berra...@redhat.com wrote:
 
 On Tue, Aug 04, 2015 at 02:34:19PM +, Alessandro Pilotti wrote:
 Hi guys,
 
 Just a quick note on the Windows editions support matrix updates for the
 Nova Hyper-V driver and Neutron networking-hyperv ML2 agent:
 
 We are planning to drop legacy Windows Server / Hyper-V Server 2008 R2
 support starting with Liberty.
 
 Windows Server / Hyper-V Server 2012 and above will continue to be
 supported.
 
 What do you mean precisely by drop support here? Are you merely no longer
 testing it, or is Nova actually broken with Hyper-V 2k8 R2 in Liberty?
 
 Generally if we intend to drop a hypervisor platform we'd expect to have a
 deprecation period for 1 cycle where Nova would print out a warning message
 on startup to alert administrators if using the platform that is intended
 to be dropped. This gives them time to plan a move to a newer platform
 before we drop the support.
 
 The plan is to move the Hyper-V specific code to a new Oslo project during
 the early M cycle and, as part of the move, the 2008 R2 specific code will
 be dropped. This refers to the OS-specific interaction layer (the *utils
 modules), which is mostly shared across multiple projects (nova, cinder,
 networking-hyperv, ceilometer, etc).
 
 At the same time, the corresponding code will be proposed for removal in
 Nova, replacing it with the new Oslo dependency.
 
 The 2008 R2 code will still be available in Liberty, although untested.
 A deprecation warning can surely be added to the logs.
 
 Alessandro
 
 
 Regards,
 Daniel
 --
 |: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o-  http://virt-manager.org :|
 |: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|
 
 __________________________________________________________________________
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


[openstack-dev] [nova] Scheduler meeting, 2015.08.10

2015-08-09 Thread Ed Leafe

Sorry, I'm going to have to miss this one. Got some engineers coming
to re-level my house. :)

I don't have anything much to discuss from my end, as I've been
heads-down on the v3 cleanup stuff.

-- 
Ed Leafe



Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-08-09 Thread Jay Pipes

On 07/28/2015 09:50 PM, Ryan Moats wrote:

If that's the case, then I'd say let's just solve this the right way and
create a new construct rather...

Ryan Moats

Kevin Benton blak...@gmail.com wrote on 07/28/2015 06:44:53 PM:

  From: Kevin Benton blak...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Date: 07/28/2015 06:46 PM
  Subject: Re: [openstack-dev] [Neutron][L3] Representing a networks
  connected by routers
 
  We need to work on that code quite a bit anyway for other features
  (get me a network, VLAN trunk ports) so adding a different parameter
  shouldn't be bad. Even if Nova doesn't initially buy in, we can
  always pre-create the port and pass it to Nova boot as a UUID.
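Kevin's fallback — pre-create the port, then boot against its UUID — maps to request shapes like these (a sketch only; the dict layouts follow the Neutron v2.0 port-create API and novaclient's `nics` argument, and the IDs are hypothetical):

```python
def port_create_body(network_id, security_group_ids):
    """Body for POST /v2.0/ports -- Neutron creates the port with the
    given security groups already attached."""
    return {"port": {"network_id": network_id,
                     "security_groups": list(security_group_ids)}}

def boot_nics(port_id):
    """The nics argument novaclient's servers.create() accepts when
    booting against a pre-created port instead of a network."""
    return [{"port-id": port_id}]
```

So even without Nova buy-in on a new construct, the orchestration layer can do the two-step create itself.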
 
  On Tue, Jul 28, 2015 at 6:15 AM, Ryan Moats rmo...@us.ibm.com wrote:
  Kevin, doesn't this in itself create technical debt on the nova side
  in the sense of what an instance attaches to?
  I agree that it looks like less technical debt than conditionally
  redefining a network, but without nova buy-in, it looks
  like a non-starter...
 
  Ryan
 
  Kevin Benton blak...@gmail.com wrote on 07/28/2015 02:15:13 AM:
 
  [snip]
 
   I would rather see something to reference a group of subnets that
   can be used for floating IP allocation and port creation in lieu of
   a network ID than the technical debt that conditionally redefining a
   network will bring.


After reading through this thread, I have to agree with Kevin here. A 
new construct for an L3 network seems like a much better long-term 
solution, even if that means a little extra coordination work between 
Nova and Neutron.


The things that Kevin listed that would need conditional logic in 
Neutron are a good indication in my mind that adding a new construct 
(versus modifying the existing L2 network construct in Neutron) is the 
better plan.


Best,
-jay



Re: [openstack-dev] Barbican : Test cases failing for the current Barbican Kilo code

2015-08-09 Thread Juan Antonio Osorio
It only runs the unit tests.

BR
On 9 Aug 2015 02:19, Asha Seshagiri asha.seshag...@gmail.com wrote:

 Thanks a lot Juan for your response:)
 One question I had: the barbican.sh script would run only the unit
 tests after the Barbican installation, right?
 The functional tests need to be run explicitly using the tox utility.

 Thanks and Regards,
 Asha Seshagiri

 On Sat, Aug 8, 2015 at 3:45 AM, Juan Antonio Osorio jaosor...@gmail.com
 wrote:

 The stable/kilo branch is currently having issues with tests (both unit and
 functional); this CR https://review.openstack.org/#/c/205059/ is meant to
 fix that, and currently it does fix the unit and doc tests. But I haven't
 been able to fix the functional tests. I'm about to start travelling so I
 won't be able to fix it soon. But hopefully Doug (redrobot) and Ade will
 take it over.

 BR
 On 8 Aug 2015 01:42, Asha Seshagiri asha.seshag...@gmail.com wrote:

 Hi All ,

 Would like to know if the current kilo branch of Barbican is stable.
 I tried to install the current kilo version of the Barbican code.
 The Barbican installation is successful but test cases are failing.

 Please find the list of commands below :

 [root@client-barbican2 barbican]# git checkout -b kilo
 origin/stable/kilo
 Branch kilo set up to track remote branch stable/kilo from origin.
 Switched to a new branch 'kilo'
   [root@client-barbican2 barbican]# git branch
 * kilo
 master

 When I ran bin/barbican.sh, Barbican was successfully installed, but
 the test cases are failing.

 Barbican is installed successfully but the unit tests seem to be failing
   Installing collected packages: barbican
 Running setup.py develop for barbican
 Successfully installed barbican
 running testr


 FAILED (id=0, failures=48, skips=6)
 error: testr failed (1)
 Starting barbican...

 PFA logs for which the test cases are failing.
 Any help would highly be appreciated.

 *Thanks and Regards,*
 *Asha Seshagiri*






 --
 *Thanks and Regards,*
 *Asha Seshagiri*



Re: [openstack-dev] [TripleO] [Puppet] Deploying OpenStack with Puppet modules on Docker with Heat

2015-08-09 Thread Steve Baker

On 06/08/15 06:29, Dan Prince wrote:

Hi,

There is a lot of interest in getting support for container based
deployment within TripleO and many different ideas and opinions on how
to go about doing that.

One idea on the table is to use Heat to help orchestrate the deployment
of docker containers. This would work similarly to our
tripleo-heat-templates implementation, except that when using docker you
would swap in a nested stack template that would configure containers on
baremetal. We've even got a nice example that shows what a
containerized TripleO overcloud might look like here [1]. The approach
outlines how you might use kolla docker containers alongside the
tripleo-heat-templates to do this sort of deployment.

This is all cool stuff but one area of concern is how we do the actual
configuration of the containers. The above implementation relies on
passing environment variables into kolla-built docker containers, which
then self-configure all the required config files and start the
service. This sounds like a start... but creating (and maintaining)
another from-scratch OpenStack configuration tool isn't high on my list
of things to spend time on. Sure, there is already a kolla community
helping to build and maintain this configuration tooling (mostly
thinking config files here) but this sounds a bit like what
tripleo-image-elements initially tried to do, and it turns out there are
much more capable configuration tools out there.

Since we are already using a good bit of Puppet in
tripleo-heat-templates, the idea came up that we would try to configure
Docker containers using Puppet. Again, here there are several ideas in
the Puppet community with regards to how docker might best be configured
with Puppet. Keeping those in mind we've been throwing some ideas out
on an etherpad here [2] that describes using Heat for orchestration,
Puppet for configuration, and Kolla docker images for containers.

A quick outline of the approach is:

-Extend the heat-container-agent [3] that runs os-collect-config and
all the hooks we require for deployment. This includes docker-compose,
bash scripts, and Puppet. NOTE: As described in the etherpad
I've taken to using DIB to build this container. I found this to be
faster from a TripleO development baseline.

-To create config files the heat-container-agent would run a puppet
manifest for a given role and generate a directory tree of config files
(/var/lib/etc-data for example).

-We then run a docker-compose software deployment that mounts those
configuration file(s) into a read only volume and uses them to start
the containerized service.
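One way that last step could look, as a docker-compose fragment (the service name, image, and paths here are illustrative guesses, not taken from the etherpad):

```yaml
# Hypothetical compose service: the Puppet-generated config tree under
# /var/lib/etc-data is mounted read-only into a Kolla-built container.
nova-compute:
  image: kollaglue/centos-rdo-nova-compute
  volumes:
    - /var/lib/etc-data/nova:/etc/nova:ro
  restart: always
```

The `:ro` mount keeps the container from mutating the Puppet-managed files, so re-running the manifest and restarting the container is the whole config-update story.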

The approach could look something like this [4]. The nice thing about
this is that it requires no modification to OpenStack Puppet modules.
We can use those today, as-is. Additionally, although Puppet runs in
the agent container we've created a mechanism to set all the resources
to noop mode except for those that generate config files. And lastly,
we can use exactly the same role manifest for docker that we do for
baremetal. Lots of re-use here... and although we are disabling a lot
of Puppet functionality in setting all the non-config resources to noop
the Kolla containers already do some of that stuff for us (starting
services, etc.).
This sounds like a viable approach; my only suggestion would be for 
there to be an option to build a puppet-container-agent which contains 
only puppet (not the heat hook too). This could allow the 
openstack-puppet and kolla communities to collaborate quickly without 
pulling in the whole tripleo stack. Then some simple docker-compose (or 
whatever) templates could be written to bring up puppet-container-agent 
with a given manifest and hieradata, then bring up a single-node kolla 
container-based cloud. This would be useful for CI and local development 
of the puppet modules supporting containers.


Then heat-container-agent can be puppet-container-agent plus the heat 
hook tooling.





All that said (and trying to keep this short) we've still got a bit of
work to do around wiring up externally created config files to kolla
build docker containers. A couple of issues are:

-The external config file mechanism for Kolla containers only seems to
support a single config file. Some services (Neutron) can have multiple
files. Could we extend the external config support to use multiple
files?

-If a service has multiple files kolla may need to adjust its service
startup script to use multiple files. Perhaps a conf.d approach would
work here?

-We are missing published version of some key kolla containers. Namely
openvswitch and the neutron-openvswitch-agent for starters but I'd also
like to have a Ceilometer agent and SNMP agent container as well so we
have feature parity with the non-docker compute role.
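On the conf.d question for services with multiple config files, the startup script's job reduces to something like this sketch (a hand-rolled illustration, not Kolla's actual scripts; oslo-based services accept repeated `--config-file` flags):

```python
import os

def config_file_args(confd_dir):
    """Build repeated --config-file arguments from a conf.d-style
    directory, taking *.conf files in lexical order so numeric
    prefixes (05-, 10-, ...) control precedence."""
    names = sorted(n for n in os.listdir(confd_dir) if n.endswith(".conf"))
    args = []
    for name in names:
        args += ["--config-file", os.path.join(confd_dir, name)]
    return args
```

oslo.config also has a native `--config-dir` option that does much the same thing, which might make the conf.d approach nearly free on the service side.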

Once we have solutions for the above I think we'll be very close to a
fully dockerized compute role with TripleO heat templates. From there
we can expand the idea to cover other roles 

Re: [openstack-dev] [Heat] OS::Neutron::Port fails to set security group by name, no way to retrieve group ID from Neutron::SecurityGroup

2015-08-09 Thread jason witkowski
Steve,

There is no error.  Heat reports a successful build with no issues.  I've
attached the neutron port-show as well as the full heat engine logs for a
build of the stack start to end.

http://paste.openstack.org/show/412313/ - Heat Engine logs
http://paste.openstack.org/show/412314/ - neutron port-show on newly
created interface


-Jason

On Sun, Aug 9, 2015 at 6:51 PM, Steve Baker sba...@redhat.com wrote:

 On 08/08/15 01:51, jason witkowski wrote:

 Thanks for the replies guys.  The issue is that it is not working.  If
 you take a look at the pastes I linked from the first email I am using the
 get_resource function in the security group resource. I am not sure if it
 is not resolving to an appropriate value or if it is resolving to an
 appropriate value but then not assigning it to the port. I am happy to
 provide any more details or examples but I'm not sure what else I can do
 but provide the configuration examples I am using that are not working?
 It's very possible my configurations are wrong but I have scoured the
 internet for any/all examples and it looks like what I have should be
 working but it is not.


 Can you provide details of what the actual error is, plus the output of
 neutron port-show for that port?
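For reference, the pattern under discussion reduces to a fragment like this (resource names and the rule are illustrative, not Jason's actual template):

```yaml
resources:
  web_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
          port_range_min: 22
          port_range_max: 22

  web_port:
    type: OS::Neutron::Port
    properties:
      network: private
      security_groups:
        - {get_resource: web_secgroup}
```

If the port still comes up with only the default group, comparing the `security_groups` list in `neutron port-show` against the stack's security-group resource ID shows whether `get_resource` resolved at all.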




Re: [openstack-dev] [neutron][infra] - py34 tests, devstack/base trusty and how does the neutron check pipeline choose between them (oh my)...

2015-08-09 Thread Jeremy Stanley
On 2015-08-09 17:40:33 -0500 (-0500), Ryan Moats wrote:
 AFAICT, the key difference between these two signatures is the image loaded
 for testing. The first used devstack-trusty
[...]
 http://logs.openstack.org/05/144205/3/check/gate-neutron-python34/3f6f7bf/console.html.gz
[...]

This was due to a bug in nodepool, which has been causing it to
occasionally double-register some workers and misidentify them
because they end up with the same IP addresses as previously deleted
workers. I've restarted nodepoold with the
https://review.openstack.org/210149 fix in place, so this should (in
theory!) stop occurring now.
-- 
Jeremy Stanley



[openstack-dev] [neutron][infra] - py34 tests, devstack/base trusty and how does the neutron check pipeline choose between them (oh my)...

2015-08-09 Thread Ryan Moats


I've spent the back half of this week chasing setup failures in the voting
py34 job that is part of the neutron check queue. Here are my two examples:
[1] is the signature that I'm trying to avoid, and [2] is a signature that
I'm happier with because it is more of a real failure...

AFAICT, the key difference between these two signatures is the image loaded
for testing. The first used devstack-trusty and the second used
bare-trusty-1438870493.template.openstack.org. Now, I may be wrong,
but it looks to me like devstack-trusty doesn't have the same packages
installed, as it tries to build various needed extensions from source code
and fails on the missing python3-dev package. OTOH, the bare-trusty
image is quite happy at this point.
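A quick way to check from inside a job whether an image carries the CPython development headers (the piece the failed extension builds complain about) is something like this sketch:

```python
import os
import sysconfig

def have_python_headers():
    """Return True if Python.h is present for the running interpreter,
    i.e. the python3-dev / python-dev package (or equivalent) appears
    to be installed on the image."""
    include_dir = sysconfig.get_paths()["include"]
    return os.path.exists(os.path.join(include_dir, "Python.h"))
```

Dropping a check like this into the job setup would pin down which image is missing the package rather than inferring it from pip's build traceback.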

What isn't quite clear to me (yet) from looking at the
openstack-infra/project-config setup is how the different images are
chosen, so I'm not sure yet if the selection of devstack-trusty for [1]
is the bug, or if the fact that devstack-trusty is missing pieces that
bare-trusty has is the bug.

Looking for help from knowledgeable people about how to go about getting
this addressed.

Thanks,
Ryan Moats
IRC: regXboi

[1]
http://logs.openstack.org/05/144205/3/check/gate-neutron-python34/3f6f7bf/console.html.gz
[2]
http://logs.openstack.org/89/160289/6/check/gate-neutron-python34/ed3ccec/console.html.gz


Re: [openstack-dev] [app-catalog][heat] Heat template contributors repo

2015-08-09 Thread Steve Baker

On 07/08/15 06:56, Fox, Kevin M wrote:

Heat templates so far seem to be a place to dump examples showing off how 
to use specific heat resources/features.

Are there any intentions to maintain production-ready heat templates in it? 
Last I asked, the answer seemed to be no.

If I misunderstood, then heat-templates would be a logical place to put them.

Historically heat-templates has avoided hosting production-ready 
templates, but this has purely been due to not having the resources 
available to maintain them.


If a community emerged who were motivated to author, maintain and 
support the infrastructure which tests these templates then I think they 
would benefit from being hosted in the heat-templates repository. It 
sounds like such a community is coalescing around the app-catalog project.


Production-ready templates could end up somewhere like 
heat-templates/hot/app-catalog. If this takes off then heat-templates 
can be assigned its own core team so that more than just heat-core could 
approve these templates.




From: Zane Bitter [zbit...@redhat.com]
Sent: Thursday, August 06, 2015 11:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [app-catalog][heat] Heat template contributors repo

On 06/08/15 13:53, Christopher Aedo wrote:

Today during the app-catalog IRC meeting we talked about hosting Heat
templates for contributors.  Right now someone who wants to create
their own templates can easily self-host them on github, but until
they get people pointed at it, nobody will know about their work on
that template, and getting guidance and feedback from all the people
who know Heat well takes a fair amount of effort.

What do you think about us creating a new repo (app-catalog-heat
perhaps), and collectively we could encourage those interested in
contributing Heat templates to host them there?  Ideally members of
the Heat community would become reviewers of the content, and give
guidance and feedback.  It would also allow us to hook into OpenStack
CI so these templates could be tested, and contributors would have a
better sense of the utility/portability of their templates.  Over time
it could lead to much more exposure for all the useful Heat templates
people are creating.

Thoughts?

Already exists:

https://git.openstack.org/cgit/openstack/heat-templates/

- ZB



Re: [openstack-dev] [TripleO] Moving instack upstream

2015-08-09 Thread Steve Baker

On 07/08/15 00:12, Dan Prince wrote:

On Thu, 2015-07-23 at 07:40 +0100, Derek Higgins wrote:

See below

On 21/07/15 20:29, Derek Higgins wrote:

Hi All,
Something we discussed at the summit was to switch the focus of
tripleo's deployment method to deploy using instack, using images built
with tripleo-puppet-elements. Up to now all the instack work has been
done downstream of tripleo as part of rdo. Having parts of our
deployment story outside of upstream gives us problems, mainly because
it becomes very difficult to CI what we expect deployers to use while
we're developing the upstream parts.

Essentially what I'm talking about here is pulling instack-undercloud
upstream along with a few of its dependency projects (instack,
tripleo-common, tuskar-ui-extras etc..) into tripleo and using them in
our CI in place of devtest.

Getting our CI working with instack is close to working but has taken
longer than I expected because of various complications and
distractions, but I hope to have something over the next few days that
we can use to replace devtest in CI. In a lot of ways this will start
out by taking a step backwards, but we should finish up in a better
place where we will be developing (and running CI on) what we expect
deployers to use.

Once I have something that works I think it makes sense to drop the
jobs undercloud-precise-nonha and overcloud-precise-nonha, while
switching overcloud-f21-nonha to use instack. This has a few effects
that need to be called out:

1. We will no longer be running CI on (and as a result not supporting)
most of the bash based elements
2. We will no longer be running CI on (and as a result not supporting)
ubuntu

One more side effect is that I think it also means we no longer have
the capability to test arbitrary Zuul refspecs for projects like Heat,
Neutron, Nova, or Ironic in our undercloud CI jobs. We've relied on the
source-repositories element to do this for us in the undercloud, and
since most of the instack stuff uses packages I think we would lose
this capability.

I'm all for testing with packages mind you... would just like to see us
build packages for any projects that have Zuul refspecs inline, create
a per job repo, and then use that to build out the resulting instack
undercloud.

This to me is the biggest loss in our initial switch to instack
undercloud for CI. Perhaps there is a middle ground here where instack
(which used to support tripleo-image-elements itself) could still
support use of the source-repositories element in one CI job until we
get our package building processes up to speed?

/me really wants 'check experimental' to give us TripleO coverage for
select undercloud projects
If Derek is receptive, I would find it useful if Delorean became a 
stackforge/openstack hosted project with better support for building 
packages from local git trees rather than remote checkouts.


With a bit of hackery I was doing this for a while, developing features 
locally on heat and other repos, then deploying an undercloud from a 
locally hosted delorean repo.


This would help getting CI working with Zuul refspecs, but it may be 
what Dan was meaning anyway when he said get our package building 
processes up to speed

Should anybody come along in the future interested in either of these
things (and prepared to put the time in) we can pick them back up
again. In fact the move to puppet element based images should mean we
can more easily add in extra distros in the future.

3. While we find our feet we should remove all tripleo-ci jobs from
non-tripleo projects; once we're confident with it we can explore
adding our jobs back into other projects again.

Nothing has changed yet. In order to check we're all on the same page,
these are the high-level details of how I see things proceeding, so
shout now if I got anything wrong or you disagree.
Ok, I have a POC that has worked end to end in our CI environment [1];
there are a *LOT* of workarounds in there, so before we can merge it I
need to clean up and remove some of those workarounds, and to do that a
few things need to move around. Below is a list of what has to happen
(as best I can tell):

1) Pull in tripleo-heat-template spec changes to master delorean
We had two patches in the tripleo-heat-template midstream packaging
that haven't made it into the master packaging; these are
https://review.gerrithub.io/241056 Package firstboot and extraconfig
templates
https://review.gerrithub.io/241057 Package environments and network
directories

2) Fixes for instack-undercloud (I didn't push these directly in case
it affected people on old versions of puppet modules)
https://github.com/rdo-management/instack-undercloud/pull/5

3) Add packaging for various repositories into openstack-packaging
I've pulled the packaging for 5 repositories into
https://github.com/openstack-packages
https://github.com/openstack-packages/python-ironic-inspector-client
https://github.com/openstack-packages/python-rdomanager-oscplugin

Re: [openstack-dev] [Heat] OS::Neutron::Port fails to set security group by name, no way to retrieve group ID from Neutron::SecurityGroup

2015-08-09 Thread Steve Baker

On 08/08/15 01:51, jason witkowski wrote:
Thanks for the replies guys.  The issue is that it is not working.  If 
you take a look at the pastes I linked from the first email I am using 
the get_resource function in the security group resource. I am not 
sure if it is not resolving to an appropriate value or if it is 
resolving to an appropriate value but then not assigning it to the 
port. I am happy to provide any more details or examples but I'm not 
sure what else I can do but provide the configuration examples I am 
using that are not working?  It's very possible my configurations are 
wrong but I have scoured the internet for any/all examples and it 
looks like what I have should be working but it is not.



Can you provide details of what the actual error is, plus the output of 
neutron port-show for that port?




Re: [openstack-dev] stable is hosed

2015-08-09 Thread Robert Collins
On 8 August 2015 at 12:45, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
 What I do know is we need to be better about bumping the minor version in a
 release rather than the patch version all of the time - we've kind of
 painted ourselves into a corner a few times here with leaving no wiggle room
 for patch releases on stable branches.

Right: any non-bugfix change should be a minor version bump. Most
(perhaps all?) dependency changes should be a minor bump. Some may
even qualify as major bumps.
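Rob's rule of thumb can be sketched as a small helper (the change categories are my labels for his cases, not an OpenStack policy document):

```python
def next_version(current, change):
    """Pick the next (major, minor, patch) release number.

    change is one of 'bugfix', 'feature', 'dependency', 'breaking':
    bugfixes bump patch, features and most dependency changes bump
    minor, breaking changes bump major.
    """
    major, minor, patch = current
    if change == "bugfix":
        return (major, minor, patch + 1)
    if change in ("feature", "dependency"):
        return (major, minor + 1, 0)
    if change == "breaking":
        return (major + 1, 0, 0)
    raise ValueError("unknown change type: %s" % change)
```

Following this consistently is what preserves the patch-level "wiggle room" Matt is asking for on stable branches.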

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] stable is hosed

2015-08-09 Thread Robert Collins
On 8 August 2015 at 08:52, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
 Well it's a Friday afternoon so you know what that means, emails about the
 stable branches being all busted to pieces in the gate.

 Tracking in the usual place:

 https://etherpad.openstack.org/p/stable-tracker

 Since things are especially fun the last two days I figured it was time for
 a notification to the -dev list.

 Both are basically Juno issues.

 1. The large ops job is busted because of some uncapped dependencies in
 python-openstackclient 1.0.1.

 https://bugs.launchpad.net/openstack-gate/+bug/1482350

 The fun thing here is g-r is capping osc<=1.0.1 and there is already a 1.0.2
 version of osc, so we can't simply cap osc in a 1.0.2 and raise that in g-r
 for stable/juno (we didn't leave ourselves any room for bug fixes).

 We talked about an osc 1.0.1.1 but pbr>=0.11 won't allow that because it
 breaks semver.

 The normal dsvm jobs are OK because they install cinder and cinder installs
 the dependencies that satisfy everything so we don't hit the osc issue.  The
 large ops job doesn't use cinder so it doesn't install it.

 Options:

 a) Somehow use a 1.0.1.post1 version for osc.  Would require input from
 lifeless.

This is really tricky. postN versions are a) not meant to change
anything functional (see PEP-440) and b) are currently mapped to devN
by pbr for compatibility with pbr 0.10.x which had the unenviable task
of dealing with PEP-440 going live in pip.

 b) Install cinder in the large ops job on stable/juno.

That would seem fine IMO.

 c) Disable the large ops job for stable/juno.

I'd rather not do this.
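Rob's point on option (a) — that post and dev releases sort differently around the base version — can be checked directly with setuptools' PEP 440-aware parser (assuming `pkg_resources` is available; this is an illustration of the ordering, not pbr's actual code):

```python
from pkg_resources import parse_version

# A .postN release sorts *after* its base version, while pbr's
# compatibility mapping of postN onto devN would sort *before* it --
# which is exactly why the two schemes conflict here.
assert parse_version("1.0.1.post1") > parse_version("1.0.1")
assert parse_version("1.0.1.dev1") < parse_version("1.0.1")
assert parse_version("1.0.1") < parse_version("1.0.2")
```

So a "1.0.1.post1" release would not land where a bugfix needs to, given pbr's mapping.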

 2. grenade on kilo blows up because python-neutronclient 2.3.12 caps
 oslo.serialization at <=1.2.0, keystonemiddleware 1.5.2 is getting pulled in
 which pulls in oslo.serialization 1.4.0 and things fall apart.

 https://bugs.launchpad.net/python-neutronclient/+bug/1482758

 I'm having a hard time unwinding this one since it's a grenade job.  I know
 the failures line up with the neutronclient 2.3.12 release which caps
 requirements on stable/juno:

 https://review.openstack.org/#/c/204654/.

 Need some help here.

I think it would be entirely appropriate to bump the lower bound of
neutronclient in kilo: running with the version with juno caps *isn't
supported* full stop, across the board. It's a bug that we have a bad
lower bound there.
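
The wedge (and why bumping the kilo lower bound resolves it) can be reproduced mechanically with the `packaging` library; the version numbers are the ones quoted in the thread:

```python
# neutronclient 2.3.12 caps oslo.serialization at <=1.2.0, while
# keystonemiddleware 1.5.2 pulls in 1.4.0 -- pip can't satisfy both.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

neutronclient_cap = SpecifierSet("<=1.2.0")  # from the juno-capped release
pulled_in = Version("1.4.0")                 # via keystonemiddleware 1.5.2

# The installed version falls outside the cap: conflict.
assert not neutronclient_cap.contains(pulled_in)

# Bumping the *lower* bound of neutronclient in kilo (e.g. >=2.4.0)
# excludes the juno-capped 2.3.12 release entirely.
kilo_lower_bound = SpecifierSet(">=2.4.0")
assert not kilo_lower_bound.contains(Version("2.3.12"))
assert kilo_lower_bound.contains(Version("2.4.0"))
```
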

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stable is hosed

2015-08-09 Thread Matt Riedemann



On 8/9/2015 7:55 PM, Matt Riedemann wrote:



On 8/9/2015 7:09 PM, Matt Riedemann wrote:



On 8/9/2015 6:57 PM, Robert Collins wrote:

On 8 August 2015 at 12:45, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:

What I do know is we need to be better about bumping the minor
version in a
release rather than the patch version all of the time - we've kind of
painted ourselves into a corner a few times here with leaving no
wiggle room
for patch releases on stable branches.


Right: any non-bugfix change should be a minor version bump. Most
(perhaps all?) dependency changes should be a minor bump. Some may
even qualify as major bumps.

-Rob



This is my hack attempt to fix grenade:

https://review.openstack.org/#/c/210870/

I'm not well versed in grenade so if there is a better way to tell
devstack to install that specific range of neutronclient on the target
side I'm all ears.



I'm not totally sure if this is due to neutronclient being 2.4.0 in
grenade kilo or not, could use some help from neutron people:

http://logs.openstack.org/70/210870/1/check/gate-grenade-dsvm-neutron/20f794e/logs/new/screen-q-svc.txt.gz?level=TRACE




Yuck, looks like another issue that's been around since last week:

http://goo.gl/Jwib2w

--

Thanks,

Matt Riedemann




[openstack-dev] [nova-scheduler] Scheduler sub-group meeting - Agenda 8/10

2015-08-09 Thread Dugger, Donald D
Meeting on #openstack-meeting-alt at 1400 UTC (8:00AM MDT)



1)  Liberty patches - 
https://etherpad.openstack.org/p/liberty-nova-priorities-tracking

2)  CPU feature representation - follow up from the mid-cycle

3)  Opens


--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786



Re: [openstack-dev] How should we expose host capabilities to the scheduler

2015-08-09 Thread Tony Breeds
On Mon, Aug 03, 2015 at 04:49:59PM +0100, Alexis Lee wrote:
 Dugger, Donald D said on Mon, Aug 03, 2015 at 05:39:49AM +:
  Also note that, although many capabilities can be represented by
  simple key/value pairs (e.g. the presence of a specific special
  instruction) that is not true for all capabilities (e.g. Numa topology
  doesn't really fit into this model) and even the need to specify
  ranges of values (more than x bytes of RAM, less than y bandwidth
  utilization) makes things more complex.
 
 I'm glad you brought up ranges. I don't want to get too exotic
 prematurely but these certainly seem useful.
 
  Without going into the solution space the first thing we need to do is
  make sure we know what the requirements are for exposing host
  capabilities.  At a minimum we need to:
  
  1)  Enumerate the capabilities.  This will involve both
  quantitative values (amount of RAM, amount of disk, ...) and Boolean
  (magic instructions present).  Also, there will be static capabilities
  that are discovered at boot time and don't change afterwards and
  dynamic capabilities that vary during node operation.
  
  2)  Expose the capabilities to both users and operators.
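
A toy illustration (not Nova code; all names here are made up) of matching a host against a mix of boolean and range capabilities, as point 1) enumerates:

```python
# Hypothetical sketch of checking host capabilities against a request
# that mixes boolean flags (magic instructions present) with numeric
# range constraints (amount of RAM, disk, ...).
import operator

OPS = {"ge": operator.ge, "le": operator.le, "eq": operator.eq}

def host_satisfies(host_caps, required):
    """required maps capability name -> (op, value) tuple or a bare value."""
    for name, constraint in required.items():
        actual = host_caps.get(name)
        if actual is None:
            return False  # capability not present on this host
        if isinstance(constraint, tuple):
            op, value = constraint
            if not OPS[op](actual, value):
                return False
        elif actual != constraint:
            return False
    return True

host = {"ram_mb": 8192, "disk_gb": 100, "avx": True}
assert host_satisfies(host, {"ram_mb": ("ge", 4096), "avx": True})
assert not host_satisfies(host, {"ram_mb": ("ge", 16384)})
assert not host_satisfies(host, {"sse4": True})  # capability not present
```
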
 
 As discussed at the midcycle, part of this is presenting a
 somewhat-uniform API. I was fairly sleepy but I seem to recall PowerVM
 does not publish an exhaustive list of its capabilities? Do we need a
 facade which lists implicit capabilities for newer CPUs?
 
 We might also want to abstract over similar capabilities in Intel and
 PowerVM?

So just for clarity, POWER CPUs do expose this information, but it isn't
exposed via the CPU flags the way Intel does.

A /proc/cpuinfo for a current POWER box looks like:
---
snip

processor   : 152
cpu : POWER8E (raw), altivec supported
clock   : 3690.00MHz
revision: 2.1 (pvr 004b 0201)

timebase: 51200
platform: PowerNV
model   : 8247-22L
machine : PowerNV 8247-22L
firmware: OPAL v3
---

Between the firmware version and the pvr value you can work out which features
the CPU supports.  Asking for SSE clearly only makes sense on Intel, and
mapping that to anything on POWER would be strange at best.
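
A rough sketch of pulling those fields out of a PowerNV /proc/cpuinfo; the parsing helper is illustrative, while the field names come from the sample above:

```python
# Parse key: value lines from a POWER /proc/cpuinfo into a dict, so the
# pvr / firmware / platform fields discussed above can be inspected.
def parse_power_cpuinfo(text):
    info = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        info[key.strip()] = value.strip()
    return info

sample = """\
processor   : 152
cpu : POWER8E (raw), altivec supported
revision: 2.1 (pvr 004b 0201)
platform: PowerNV
firmware: OPAL v3
"""

info = parse_power_cpuinfo(sample)
assert info["platform"] == "PowerNV"
assert info["firmware"] == "OPAL v3"
assert "pvr 004b 0201" in info["revision"]
```
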


IIRC a primary motivator for this is to say "this OS image has been built with
SSE, so only boot it on a compute host with that support".  As that's an image
feature, it doesn't make sense to run it on POWER.
 
Yours Tony.




Re: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares

2015-08-09 Thread Jamie Lennox


- Original Message -
 From: Mehdi Abaakouk sil...@sileht.net
 To: openstack-dev@lists.openstack.org
 Sent: Friday, August 7, 2015 1:57:54 AM
 Subject: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares
 
 Hi,
 
 I want to share with you some problems I have recently encountered with
 openstack middlewares and oslo.config.
 
 The issues
 --
 
 In project Gnocchi, I wanted to use oslo.middleware.cors; I expected to
 just add the name of the middleware to the wsgi pipeline, but I can't.
 The middleware only works if you pass it the oslo_config.cfg.ConfigOpts()
 object or go via 'paste-deploy'... Gnocchi doesn't use paste-deploy, so
 I have to modify the code to load it...
 (For keystonemiddleware, Gnocchi already has a special
 handling/hack to load it [1] and [2]).
 I don't want to write the same hack for each openstack middleware.
 
 
 In project Aodh (ceilometer-alarm), we recently hit an issue with
 keystonemiddleware when we removed the usage of the global object
 oslo_config.cfg.CONF. The middleware doesn't load its options from the
 config file of aodh anymore, so our authentication is broken.
 We can still pass them through the paste-deploy configuration, but that
 looks like a method of the past. I still don't want to write a hack for
 each openstack middleware.
 
 
 Then I dug into other middlewares and applications to see how
 they handle their conf.

 oslo_middleware.sizelimit and oslo_middleware.ssl take options only
 via the global oslo_config.cfg.CONF, so they are unusable for applications
 that don't use this global object.

 oslo_middleware.healthcheck takes options as a dict, like any other python
 middleware. This is suitable for 'paste-deploy', but doesn't allow
 configuration via oslo.config and doesn't have strong config option
 type checking and so on.
 
 Zaqar seems to have the same kind of issue with keystonemiddleware, and
 just wrote a hack to work around it (monkeypatching the cfg.CONF of
 keystonemiddleware with their local version of the object [3] and then
 transforming the loaded options into a dict to pass them via the legacy
 middleware dict options [4]).

 Most applications still just use the global object for their
 configuration and don't see these issues yet.
 
 
 All of that is really not consistent.
 
 This is confusing for developers: some middlewares need pre-setup and
 force them to rely on a global python object, while others do not.
 This is confusing for deployers: they can't configure middlewares the
 same way across middlewares and projects.

 But keystonemiddleware, oslo.middleware.cors, ... are supposed to be wsgi
 middlewares, something independent of the app.
 And this is not really the case.

 From my point of view, and from what wsgi generally looks like in python,
 the middleware object should be just MyMiddleware(app, options_as_dict);
 if the middleware wants to rely on another configuration system, it should
 do the setup/initialisation itself.
 
 
 
 So, how to solve that ?
 
 
 Do you agree:
 
 * all openstack middlewares should load their options with oslo.config ?
   this permits type checking and all the other features it provides; it's cool :)
   configuration in the paste-deploy conf is a thing of the past

 * we must support local AND global oslo.config objects ?
   This is an application choice, not something enforced by the middleware.
   The deployer experience should be the same in both cases.

 * the middleware must be responsible for its section name in the oslo.config ?
   The Gnocchi/Zaqar hacks have to hardcode the section name in their code,
   which doesn't look good.
 
 * we must support the legacy python signature for WSGI objects,
   MyMiddleware(app, options_as_dict) ? To be able to use paste for
   applications/deployers that want it and not break already deployed things.
 
 
 I really think all our middlewares should be consistent:
 
 * to be usable by all applications without forcing them to write hacks
 around them,
 * and to make the deployer's life easier.
 
 
 Possible solution:
 --
 
 I have already started to work on something that does all of that for all
 middlewares [5], [6].

 The idea is that the middleware should create an oslo_config.cfg.ConfigOpts()
 (instead of relying on the global one) and load the configuration file of the
 application into it. oslo.config will discover the file location just from
 the name of the application, as usual.
 
 So the middleware can now be loaded like this:
 
 code example:
 
app = MyMiddleware(app, {oslo_config_project: aodh})
 
 paste-deploy example:
 
[filter:foobar]
paste.filter_factory = foobar:MyMiddleware.filter_factory
oslo_config_project = aodh
 
 oslo_config.cfg.ConfigOpts() will easily find /etc/aodh/aodh.conf.
 This cuts the hidden link between the middleware and the application
 (through the global object).
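
A pure-python sketch of the proposed loading pattern; the ConfigOpts interaction is stubbed out in comments, and every name besides oslo_config_project is illustrative:

```python
# The middleware takes a plain options dict (paste-compatible).  If an
# 'oslo_config_project' key is present it would build its own private
# ConfigOpts for that project instead of touching the global CONF.
class MyMiddleware(object):
    def __init__(self, app, options=None):
        options = options or {}
        self.app = app
        self.project = options.get("oslo_config_project")
        if self.project:
            # Real code would do roughly:
            #   conf = oslo_config.cfg.ConfigOpts()
            #   conf([], project=self.project)  # finds /etc/aodh/aodh.conf
            self.conf_source = "private ConfigOpts for %s" % self.project
        else:
            # Legacy fallback: global CONF / plain dict options.
            self.conf_source = "global CONF"

    @classmethod
    def filter_factory(cls, global_conf, **local_conf):
        # paste-deploy entry point, as in the [filter:foobar] example above.
        def _factory(app):
            return cls(app, local_conf)
        return _factory

    def __call__(self, environ, start_response):
        return self.app(environ, start_response)

factory = MyMiddleware.filter_factory({}, oslo_config_project="aodh")
mw = factory(lambda environ, start_response: [b""])
assert mw.project == "aodh"
assert "aodh" in mw.conf_source
```
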
 
 And of course if oslo_config_project is not provided, the middleware
 falls back to the global 

Re: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares

2015-08-09 Thread Jamie Lennox



Re: [openstack-dev] [api] [all] To changes-since or not to changes-since

2015-08-09 Thread hao wang
Hi, stackers

Now that we have merged the filtering guideline[1], does that mean we should
implement this feature according to that guideline?  Like this:

*GET /app/items?f_updated_at=gte:some_timestamp*

Have we reached a consensus on this?
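
A sketch of parsing that guideline-style parameter; the f_ prefix and operator tokens follow the example above, while everything else (helper name, exact operator set) is illustrative:

```python
# Split "f_updated_at=gte:some_timestamp" style filter parameters into
# (field, operator, value).  A value with no recognised operator prefix
# is treated as plain equality.
VALID_OPS = {"gte", "lte", "gt", "lt", "eq", "neq"}

def parse_filter(param, raw):
    if not param.startswith("f_"):
        raise ValueError("not a filter parameter: %s" % param)
    field = param[2:]
    op, sep, value = raw.partition(":")
    if not sep or op not in VALID_OPS:
        op, value = "eq", raw
    return field, op, value

assert parse_filter("f_updated_at", "gte:2015-08-09T00:00:00Z") == \
    ("updated_at", "gte", "2015-08-09T00:00:00Z")
assert parse_filter("f_state", "deleted") == ("state", "eq", "deleted")
```
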

2015-06-19 17:07 GMT+08:00 Chris Dent chd...@redhat.com:


 There's an open question in the API-WG on whether to formalize or
 otherwise enshrine the concept of a changes-since query parameter
 on collection oriented resources across the projects. The original
 source of this concept is from Nova's API:


 http://docs.openstack.org/developer/nova/v2/polling_changes-since_parameter.html

 There are arguments for and against but we've been unable to reach a
 consensus so the agreed next step was to bring the issue to the
 mailing list so more people can hash it out and provide their input.
 The hope is that concerns or constraints that those in the group
 are not aware of will be revealed and a better decision will be
 reached.

 The basic idea of changes-since is that it can be used as a way to
 signal that the requestor is doing some polling and would like to
 ask Give me stuff that has changed since the last time I checked.
 As I understand it, for the current implementations (in Nova and
 Glance) this means including stuff that has been deleted. Repeated
 requests to the resource with a changes-since parameter give a
 running report on the evolving state of the resource. This is intended
 to allow efficient polling[0].
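
The polling pattern being described might look like this on the client side (the parameter name follows the Nova docs linked above; the helper itself is illustrative):

```python
# The client remembers when it last polled and asks only for what has
# changed since then, deletions included.
from datetime import datetime, timezone
from urllib.parse import urlencode

def next_poll_query(last_checked):
    """Build the query string for the next changes-since poll."""
    return urlencode({"changes-since": last_checked.isoformat()})

last = datetime(2015, 8, 9, 12, 0, 0, tzinfo=timezone.utc)
query = next_poll_query(last)
assert query == "changes-since=2015-08-09T12%3A00%3A00%2B00%3A00"
```
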

 The pro camp on this likes it because this is very useful and quite
 humane: The requestor doesn't need to know the details of how the
 query is is implemented under the hood. That is, if there are
 several timestamps associated with the singular resources in the
 collection which of those are used for time comparison and which
 attributes (such as state with a value of deleted) are used to
 determine if a singular resource is included. The service gets to
 decide these things, which it needs to do in order to make effective
 queries of the data store and to actually achieve efficient polling.

 The con camp doesn't like it because it introduces magic, ambiguity
 and inconsistency into the API (when viewed from a cross-project
 perspective) and one of the defining goals of the working group is
 to slowly guide things to some measure of consistency. The
 alternate approach is to formulate a fairly rigorous query system
 for both filtering[1] and sorting[2] and use that to specify
 explicit queries that state I want resources that are newer than time
 X in timestamp attribute 'updated_at' _and_ have attribute 'state'
 that is one of 'foo', 'bar' or 'baz'.

 (I hope I have represented the two camps properly here and not
 introduced any bias. Myself I'm completely on the fence. If you
 think I've misrepresented the state of things please post a
 clarifying response.)

 The questions come down to:

 * Are there additional relevant pros and cons for the two proposals?
 * Are there additional proposals which can address the shortcomings
   in either?

 Thanks for your input.

 [0] Please try to refrain from responses on the line of ha!
 efficiency! that's hilarious! and ZOMG, polling, that's so
 last century. Everybody knows this already and it's not
 germane to the immediate concerns. We'll get to a fully message
 driven architecture next week. This week we're still working
 with what we've got.
 [1] filtering guideline proposal
 https://review.openstack.org/#/c/177468/
 [2] sorting guideline proposal
 https://review.openstack.org/#/c/145579/
 --
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent





-- 

Best Wishes For You!


Re: [openstack-dev] stable is hosed

2015-08-09 Thread Matt Riedemann



On 8/9/2015 7:09 PM, Robert Collins wrote:

On 8 August 2015 at 08:52, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

Well it's a Friday afternoon so you know what that means, emails about the
stable branches being all busted to pieces in the gate.

Tracking in the usual place:

https://etherpad.openstack.org/p/stable-tracker

Since things are especially fun the last two days I figured it was time for
a notification to the -dev list.

Both are basically Juno issues.

1. The large ops job is busted because of some uncapped dependencies in
python-openstackclient 1.0.1.

https://bugs.launchpad.net/openstack-gate/+bug/1482350

The fun thing here is g-r is capping osc<=1.0.1 and there is already a 1.0.2
version of osc, so we can't simply cap osc in a 1.0.2 and raise that in g-r
for stable/juno (we didn't leave ourselves any room for bug fixes).

We talked about an osc 1.0.1.1 but pbr>=0.11 won't allow that because it
breaks semver.

The normal dsvm jobs are OK because they install cinder and cinder installs
the dependencies that satisfy everything so we don't hit the osc issue.  The
large ops job doesn't use cinder so it doesn't install it.

Options:

a) Somehow use a 1.0.1.post1 version for osc.  Would require input from
lifeless.


This is really tricky. postN versions are a) not meant to change
anything functional (see PEP-440) and b) are currently mapped to devN
by pbr for compatibility with pbr 0.10.x which had the unenviable task
of dealing with PEP-440 going live in pip.


b) Install cinder in the large ops job on stable/juno.


That would seem fine IMO.


c) Disable the large ops job for stable/juno.


I'd rather not do this.


2. grenade on kilo blows up because python-neutronclient 2.3.12 caps
oslo.serialization at <=1.2.0, keystonemiddleware 1.5.2 is getting pulled in
which pulls in oslo.serialization 1.4.0 and things fall apart.

https://bugs.launchpad.net/python-neutronclient/+bug/1482758

I'm having a hard time unwinding this one since it's a grenade job.  I know
the failures line up with the neutronclient 2.3.12 release which caps
requirements on stable/juno:

https://review.openstack.org/#/c/204654/.

Need some help here.


I think it would be entirely appropriate to bump the lower bound of
neutronclient in kilo: running with the version with juno caps *isn't
supported* full stop, across the board. It's a bug that we have a bad
lower bound there.


On Friday, adam_g and I weren't sure what the policy was on overlapping 
upper bounds for juno and lower bounds for kilo, but yeah, my first 
reaction was "that's bonkers", and anything that's working that way today 
is just working as a fluke and could wedge us the same way at any point.


The reason I'd prefer to not raise the minimum required version of a 
library in stable g-r is simply for those packagers/distros that are 
basically frozen on the versions of libraries they are providing for 
kilo and I'd like to not make them arbitrarily move up just because of 
our screw ups.


Apparently in our case (the product I work on), we already ship 
neutronclient 2.4.0 in our kilo release, so I guess it wouldn't be the 
end of the world for us.




-Rob




--

Thanks,

Matt Riedemann




[openstack-dev] [cinder] Brocade CI

2015-08-09 Thread Mike Perez
People have asked me at the Cinder midcycle sprint to look at the Brocade CI
to:

1) Keep the zone manager driver in Liberty.
2) Consider approving additional specs that were submitted before the
   deadline.

Here are the current problems with the last 100 runs [1]:

1) Not posting success or failure.
2) Not posting a result link to view logs.
3) Not consistently doing runs. If you compare with other CIs, there are plenty
   missing in a day.

This CI does not follow the guidelines [2]. Please get help [3].

[1] - http://paste.openstack.org/show/412316/
[2] - 
http://docs.openstack.org/infra/system-config/third_party.html#requirements
[3] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Questions

-- 
Mike Perez



Re: [openstack-dev] stable is hosed

2015-08-09 Thread Matt Riedemann



On 8/9/2015 7:09 PM, Matt Riedemann wrote:



On 8/9/2015 6:57 PM, Robert Collins wrote:

On 8 August 2015 at 12:45, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:

What I do know is we need to be better about bumping the minor
version in a
release rather than the patch version all of the time - we've kind of
painted ourselves into a corner a few times here with leaving no
wiggle room
for patch releases on stable branches.


Right: any non-bugfix change should be a minor version bump. Most
(perhaps all?) dependency changes should be a minor bump. Some may
even qualify as major bumps.

-Rob



This is my hack attempt to fix grenade:

https://review.openstack.org/#/c/210870/

I'm not well versed in grenade so if there is a better way to tell
devstack to install that specific range of neutronclient on the target
side I'm all ears.



I'm not totally sure if this is due to neutronclient being 2.4.0 in 
grenade kilo or not, could use some help from neutron people:


http://logs.openstack.org/70/210870/1/check/gate-grenade-dsvm-neutron/20f794e/logs/new/screen-q-svc.txt.gz?level=TRACE

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [TripleO] Moving instack upstream

2015-08-09 Thread Joshua Harlow

Steve Baker wrote:

On 07/08/15 00:12, Dan Prince wrote:

On Thu, 2015-07-23 at 07:40 +0100, Derek Higgins wrote:

See below

On 21/07/15 20:29, Derek Higgins wrote:

Hi All,
Something we discussed at the summit was to switch the focus of
tripleo's deployment method to deploy using instack using images
built
with tripleo-puppet-elements. Up to now all the instack work has
been
done downstream of tripleo as part of rdo. Having parts of our
deployment story outside of upstream gives us problems mainly
because it
becomes very difficult to CI what we expect deployers to use while
we're
developing the upstream parts.

Essentially what I'm talking about here is pulling instack
-undercloud
upstream along with a few of its dependency projects (instack,
tripleo-common, tuskar-ui-extras etc..) into tripleo and using them
in
our CI in place of devtest.

Getting our CI working with instack is close to working but has
taken
longer then I expected because of various complications and
distractions
but I hope to have something over the next few days that we can use
to
replace devtest in CI, in a lot of ways this will start out by
taking a
step backwards but we should finish up in a better place where we
will
be developing (and running CI on) what we expect deployers to use.

Once I have something that works I think it makes sense to drop the
jobs
undercloud-precise-nonha and overcloud-precise-nonha, while
switching
overcloud-f21-nonha to use instack, this has a few effects that
need to
be called out

1. We will no longer be running CI on (and as a result not
supporting)
most of the the bash based elements
2. We will no longer be running CI on (and as a result not
supporting)
ubuntu

One more side effect is that I think it also means we no longer have
the capability to test arbitrary Zuul refspecs for projects like Heat,
Neutron, Nova, or Ironic in our undercloud CI jobs. We've relied on the
source-repositories element to do this for us in the undercloud and
since most of the instack stuff uses packages I think we would lose
this capability.

I'm all for testing with packages mind you... would just like to see us
build packages for any projects that have Zuul refspecs inline, create
a per job repo, and then use that to build out the resulting instack
undercloud.

This to me is the biggest loss in our initial switch to instack
undercloud for CI. Perhaps there is a middle ground here where instack
(which used to support tripleo-image-elements itself) could still
support use of the source-repositories element in one CI job until we
get our package building processes up to speed?

/me really wants 'check experimental' to give us TripleO coverage for
select undercloud projects

If Derek is receptive, I would find it useful if Delorean became a
stackforge/openstack hosted project with better support for building
packages from local git trees rather than remote checkouts.

With a bit of hackery I was doing this for a while, developing features
locally on heat and other repos, then deploying an undercloud from a
locally hosted delorean repo.


Just an FYI, but anvil is now building centos7 rpms in its gate, from 
git checkouts (something it's been doing for a long time, but now does 
in the gate as well); this includes all the dependencies not already 
found in EPEL:


A run from a little while ago showing the various stages:

http://logs.openstack.org/43/210643/1/check/gate-anvil-rpms-dsvm-devstack-centos7/1aa3f56/ 
(the full logs + rpmbuild logs output)...


Stages:

(1) Git checkouts @ 
http://logs.openstack.org/43/210643/1/check/gate-anvil-rpms-dsvm-devstack-centos7/1aa3f56/console.html#_2015-08-10_00_16_34_130


(2) Requirement unifying/modification @ 
http://logs.openstack.org/43/210643/1/check/gate-anvil-rpms-dsvm-devstack-centos7/1aa3f56/console.html#_2015-08-10_00_17_22_862


(3) Determination of what's already satisfiable via epel or other repos @ 
http://logs.openstack.org/43/210643/1/check/gate-anvil-rpms-dsvm-devstack-centos7/1aa3f56/console.html#_2015-08-10_00_17_41_733 



(4) Downloading using pip unsatisfied/not found dependencies @ 
http://logs.openstack.org/43/210643/1/check/gate-anvil-rpms-dsvm-devstack-centos7/1aa3f56/console.html#_2015-08-10_00_17_42_113


(5) Filtering what was downloaded (pip pulls in dependencies of 
dependencies so have to kick out things that are already satisfiable 
again) @ 
http://logs.openstack.org/43/210643/1/check/gate-anvil-rpms-dsvm-devstack-centos7/1aa3f56/console.html#_2015-08-10_00_18_51_638


(6) Creation of source rpms from pip downloaded archives @ 
http://logs.openstack.org/43/210643/1/check/gate-anvil-rpms-dsvm-devstack-centos7/1aa3f56/console.html#_2015-08-10_00_18_52_365


(7) Creation of source rpms (spec files generated here for each git repo 
from common templates...) from git checked out repositories @ 
http://logs.openstack.org/43/210643/1/check/gate-anvil-rpms-dsvm-devstack-centos7/1aa3f56/console.html#_2015-08-10_00_19_41_450


(8) Preparation stage finish @ 

Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-09 Thread Jamie Lennox


- Original Message -
 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: openstack-dev@lists.openstack.org
 Sent: Sunday, August 9, 2015 12:29:49 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 Hi Jamie
 
 nice presentation, thanks for sharing it. I have forwarded it to my
 students working on federation aspects of Horizon.
 
 About public federated cloud access, the way you envisage it, i.e. that
 every user will have his own tailored (subdomain) URL to the SP is not
 how it works in the real world today. SPs typically provide one URL,
 which everyone from every IdP uses, so that no matter which browser you
 are using, from wherever you are in the world, you can access the SP
 (via your IdP). The only thing the user needs to know, is the name of
 his IdP, in order to correctly choose it.
 
 So discovery of all available IdPs is needed. You are correct in saying
 that Shib supports a separate discovery service (WAYF), but Horizon can
 also play this role, by listing the IdPs for the user. This is the mod
 that my student is making to Horizon, by adding type ahead searching.

So my point at the moment is that, unless there's something I'm missing in the 
way shib/mellon discovery works, Horizon can't play that role. Because we forward to a 
common websso entry point, there is no way (that I know of) for the user's selection in 
Horizon to be forwarded to Keystone. You would still need a custom select-your-idp 
discovery page in front of Keystone. I'm not sure if this addition is part 
of your student's work; it just hasn't been mentioned yet.

 About your proposed discovery mod, surely this seems to be going in the
 wrong direction. A common entry point to Keystone for all IdPs, as we
 have now with WebSSO, seems to be preferable to separate entry points
 per IdP. Which high street shop has separate doors for each user? Or
 have I misunderstood the purpose of your mod?

The purpose of the mod is purely to bypass the need to have a shib/mellon 
discovery page on /v3/OS-FEDERATION/websso/saml2. This page is currently 
required to allow a user to select their idp (presumably from the ones 
supported by keystone) and redirect to that IdP's specific login page. When the 
response comes back from that login it returns to that websso page, and we look 
at remote_ids to determine which keystone idp is handling the response from 
that site.

If we were to move that to 
/v3/OS-FEDERATION/identity_providers/{idp_id}/protocols/saml2/websso then we 
could more easily support selection from horizon, or otherwise do discovery 
without relying on shib/mellon's discovery mechanism. A selection from horizon 
would forward us to the idp-specific websso on keystone, which would forward to 
the idp's login page (without needing discovery, because we already know the 
idp), and the response from login would go to the idp-specific page, negating 
the need to deal with remote_ids.
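For illustration, the two entry points being discussed differ only in whether the IdP is encoded in the path. A rough sketch (the endpoint paths come from the proposal above, but the base URL, IdP id, and helper functions are made-up examples, not Keystone code):

```python
# Illustration only: the base URL and IdP id below are invented examples.
BASE = "https://keystone.example.com/v3"

def common_websso_url(protocol):
    # Current scheme: one shared entry point, so the IdP has to be
    # (re)discovered behind this URL by shib/mellon.
    return "%s/OS-FEDERATION/websso/%s" % (BASE, protocol)

def idp_websso_url(idp_id, protocol):
    # Proposed scheme: the IdP is encoded in the path, so a selection
    # made in Horizon carries through without another discovery step.
    return "%s/OS-FEDERATION/identity_providers/%s/protocols/%s/websso" % (
        BASE, idp_id, protocol)

print(common_websso_url("saml2"))
print(idp_websso_url("acme-idp", "saml2"))
```

With the second form, Horizon can build the redirect itself from the user's pick and never needs a discovery page in front of Keystone.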

So I'm not looking for a separate door so much as a way to indicate that the 
user picked an IdP in horizon and I don't want to do discovery again.
 
 regards
 
 David
 
 On 07/08/2015 01:29, Jamie Lennox wrote:
  
  
  
  
  *From: *Dolph Mathews dolph.math...@gmail.com
  *To: *OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  *Sent: *Friday, August 7, 2015 9:09:25 AM
  *Subject: *Re: [openstack-dev] [Keystone] [Horizon] Federated Login
  
  
  On Thu, Aug 6, 2015 at 11:25 AM, Lance Bragstad lbrags...@gmail.com
  mailto:lbrags...@gmail.com wrote:
  
  
  
  On Thu, Aug 6, 2015 at 10:47 AM, Dolph Mathews
  dolph.math...@gmail.com mailto:dolph.math...@gmail.com wrote:
  
  
  On Wed, Aug 5, 2015 at 6:54 PM, Jamie Lennox
  jamielen...@redhat.com mailto:jamielen...@redhat.com wrote:
  
  
  
  - Original Message -
   From: David Lyle dkly...@gmail.com
   mailto:dkly...@gmail.com
   To: OpenStack Development Mailing List (not for usage
   questions) openstack-dev@lists.openstack.org
  mailto:openstack-dev@lists.openstack.org
   Sent: Thursday, August 6, 2015 5:52:40 AM
   Subject: Re: [openstack-dev] [Keystone] [Horizon]
   Federated Login
  
   Forcing Horizon to duplicate Keystone settings just makes
   everything much
   harder to configure and much more fragile. Exposing
   whitelisted, or all,
   IdPs makes much more sense.
  
   On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews 
   dolph.math...@gmail.com mailto:dolph.math...@gmail.com
   
   wrote:
  
  
  
   On Wed, Aug 

Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and glance

2015-08-09 Thread Mike Perez
On 13:07 Aug 07, Jay Pipes wrote:
 Hi Nik, some comments inline, but tl;dr I am strongly against
 returning the glance_store library to the Glance source repository.
 Explanations inline...
 
 On 08/07/2015 01:21 AM, Nikhil Komawar wrote:
 Hi,
 
 During the mid-cycle we had another proposal that wanted to put the
 glance_store library back into the Glance repo rather than leave it as a
 separate repo/project.
 
 The outstanding question is: what are the use cases that want it as a
 separate library?
 
 The original use cases that supported a separate lib have not had much
 progress or adoption yet.
 
 This is really only due to a lack of time to replace the current
 nova/image/download/* stuff with calls to the glance_store library.
 It's not that the use case has gone away; it's just a lack of time to
 work on it.

When Cinder wanted to integrate os-brick (initiator code library) into Nova,
Cinder folks integrated it themselves [1]. Has anyone from Glance been spending
the time to do this? If so, are there reviews you can give so we can see why
things are blocked?

 There have been complaints about the overhead of
 maintaining it as a separate lib and tracking versions, without much gain.
 
 I don't really see much overhead in maintaining a separate lib,
 especially when it represents functionality that can be used by
 Cinder and Nova directly.

As mentioned earlier, Cinder is doing os-brick for both Cinder and Nova to
consume. Other projects are planning to use it as well, and it has been a huge
win to take some complexity off projects that aren't focused on block storage
the way we are. I recommend creating a gerrit dashboard [2] to include a
separate library with Glance reviews. I'm also not sure what overhead there
could be besides having to do releases.

snip

 4. cleaner api / more methods that support backend store capabilities -
 a separate library is not necessarily needed, smoother re-factor is
 possible within Glance codebase.
 
 So, here's the crux of the issue. Nova and Cinder **do not want to
 speak the Glance REST API** to either upload or download image bits
 from storage. Streaming image bits through the Glance API endpoint is
 a needless and inefficient step, and Nova and Cinder would like to
 communicate directly with the backend storage systems.
 
 glance_store IS the library that would enable Nova and Cinder to
 communicate directly with the backend storage systems. The Glance API
 will only be used by Nova and Cinder to get information *about* the
 images in backend storage, not the image bits themselves.

+1

Provide a way to talk to the implementation, and step out of the way to let it
do what it does best. This is no different from other OpenStack projects.

In Cinder, image copying to a volume is a huge problem today. We have a Cinder
glance_store [2] that is being revived to leave images stored in the block
storage backends themselves, which, oh my goodness, is using os-brick!!

The block storage backends know how to do efficient image copying/cloning to
their volumes, not Glance.


[1] - https://review.openstack.org/#/c/175569/
[2] - https://review.openstack.org/#/c/166414/

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stable is hosed

2015-08-09 Thread Matt Riedemann



On 8/9/2015 6:57 PM, Robert Collins wrote:

On 8 August 2015 at 12:45, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

What I do know is we need to be better about bumping the minor version in a
release rather than the patch version all of the time - we've kind of
painted ourselves into a corner a few times here with leaving no wiggle room
for patch releases on stable branches.


Right: any non-bugfix change should be a minor version bump. Most
(perhaps all?) dependency changes should be a minor bump. Some may
even qualify as major bumps.

-Rob
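To make the wiggle-room point concrete, here is a toy sketch (not project tooling; the version numbers are invented examples): if master's next release is 2.22.1, the stable branch has no 2.22.x patch numbers left for its own bugfix releases, whereas bumping minor on master keeps the whole 2.22.x series free for stable.

```python
def bump(version, kind):
    """Toy semver bump: version is a '<major>.<minor>.<patch>' string."""
    major, minor, patch = map(int, version.split('.'))
    if kind == 'major':
        return '%d.0.0' % (major + 1)
    if kind == 'minor':
        return '%d.%d.0' % (major, minor + 1)
    return '%d.%d.%d' % (major, minor, patch + 1)

# Bumping minor on master (2.22.0 -> 2.23.0) leaves the 2.22.x series
# free for bugfix releases on the stable branch:
print(bump('2.22.0', 'minor'))  # 2.23.0
# Bumping patch on master (2.22.0 -> 2.22.1) consumes that room:
print(bump('2.22.0', 'patch'))  # 2.22.1
```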



This is my hack attempt to fix grenade:

https://review.openstack.org/#/c/210870/

I'm not well versed in grenade so if there is a better way to tell 
devstack to install that specific range of neutronclient on the target 
side I'm all ears.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Detach a volume when it's under migration

2015-08-09 Thread liuxinguo
Detach a volume when it's under migration; the volume status remains in-use:

1. Create vol-1.

2. Attach vol-1 to a VM.

3. Migrate vol-1.

4. While vol-1 is under migration, detach vol-1.

5. After vol-1 is detached, the cinder list command shows that the status of 
vol-1 is still in-use.

If the volume's 'migration_status' is not None, the detach process won't update 
the status of the volume to 'available':

volume_ref = _volume_get(context, volume_id, session=session)
if not remain_attachment:
    # Hide status update from user if we're performing volume migration
    # or uploading it to image
    if (not volume_ref['migration_status'] and
            not (volume_ref['status'] == 'uploading')):
        volume_ref['status'] = 'available'

So how should we deal with this? Does it mean that we should not detach a volume 
while it's under migration?
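To make the effect of that check concrete, here is a minimal stand-in for the quoted logic (a simplified sketch, not the actual Cinder DB code):

```python
def status_after_detach(volume):
    """Simplified stand-in for the quoted status-update logic.

    Returns the status the volume is left with after detaching its last
    attachment (i.e. when no attachments remain).
    """
    # The status update is hidden while the volume is migrating or uploading.
    if not volume['migration_status'] and volume['status'] != 'uploading':
        return 'available'
    return volume['status']

# Detaching mid-migration leaves the volume 'in-use':
print(status_after_detach({'migration_status': 'migrating', 'status': 'in-use'}))
# A normal detach flips it to 'available':
print(status_after_detach({'migration_status': None, 'status': 'in-use'}))
```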

Thanks for any input!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev